Introduction to Oracle Database 11g: Implement Streams

Presentation on theme: "Introduction to Oracle Database 11g: Implement Streams"— Presentation transcript:

1 Introduction to Oracle Database 11g: Implement Streams

2 Oracle Database 11g: Implement Streams I - 2
Objectives
After completing this lesson, you should be able to describe:
- The overall course objectives
- The curriculum context
- Your learning aids
Oracle Database 11g: Implement Streams I - 2

3 Oracle Database 11g: Implement Streams I - 3
Course Objectives
After completing this course, you should be able to:
- Configure an Oracle Streams environment
- Alter the Oracle Streams environment to add, modify, and drop sites or objects
- Configure conflict handling for data replication
- Transform the data being replicated between two sites
- Enqueue and dequeue messages by using Oracle Streams Advanced Queuing
- Monitor the capture, propagation, and apply of messages
- Perform basic troubleshooting of an Oracle Streams environment

Course Objectives
The aim of this course is to introduce you to the architecture and components that make up Oracle Streams. This course covers the foundation of an Oracle Streams system:
- Queues, logical change records (LCRs), and messages
- Object types
- Rules
- Streams tasks: capture, propagation, and apply
- Apply handlers
- Enqueuing messages and LCRs
- Conflict resolution
- Transformations
- Extending the Oracle Streams environment
Oracle Database 11g: Implement Streams I - 3

4 Oracle Database 11g: Implement Streams I - 4
Suggested Schedule

  Topic                                                       Lessons             Day
  Streams Fundamentals                                        1, 2, 3, 4          1
  Streams Concepts and Architecture (manual configuration,    5, 6, 7, 8, 9       2
    concluded by the end of the day)
  Streams Concepts and Architecture (additional capture,      10, 11, 12, 13      3
    transformation, and apply topics)
  Managing and Extending Streams                              14, 15, 16, 17, 18  4
  Operational Tips and Considerations                         19, 20, 21          5
  Streams Messaging                                           22, 23              5

Suggested Schedule
The lessons in this guide are arranged in the order in which you will probably study them in the class. If your instructor teaches the class in the sequence in which the lessons are printed in this guide, the class should run approximately as shown in the schedule. Your instructor, however, may vary the sequence of the lessons for a number of reasons, including:
- Customizing material for a specific audience
- Covering a topic in a single day instead of splitting the material across two days
- Maximizing the use of course resources (such as hardware and software)
Oracle Database 11g: Implement Streams I - 4

5 Oracle University Curriculum Overview
[Slide diagram: curriculum path from Oracle Database 11g: Administration Workshop I and Oracle Database 11g: Administration Workshop II to Oracle Database 11g: Implement Streams]

Oracle University Curriculum Overview
By using Oracle Streams, you can share data and messages in a data stream, either within a database or from one database to another. You can also control what information is put into a stream, how the stream flows or is routed from one database to another, what happens to messages in the stream as they flow into each database, and how the stream terminates. This course:
- Helps you use Oracle Streams to build and operate distributed enterprises and applications, data warehouses, and high-availability solutions
- Is designed to give you practical experience in the configuration and maintenance of an Oracle Streams environment
Note: The Oracle Database 11g: Administration Workshop I and Oracle Database 11g: Administration Workshop II courses can serve as prerequisites for this course.
Oracle Database 11g: Implement Streams I - 5

6 Oracle Streams Documentation
- Oracle Database 2 Day + Data Replication and Integration Guide for a task-oriented overview of Oracle Streams, materialized views, and other distributed database functionality
- Oracle Streams Concepts and Administration for general information about Oracle Streams
- Oracle Streams Replication Administrator's Guide for information about using Oracle Streams for replication
- Oracle Streams Advanced Queuing User's Guide for information about using Oracle Streams for message queuing
- Oracle Database PL/SQL Packages and Types Reference for information about the packages that you can use to configure and manage Oracle Streams
- Other documentation

Other Documentation
In addition to the Oracle Streams documentation, you may find the following references useful:
- Oracle Database Heterogeneous Connectivity Administrator's Guide for information about Oracle Database Gateway
- Oracle Database Advanced Replication and Oracle Database Advanced Replication Management API Reference for more information about materialized views
- Oracle Database Administrator's Guide for information about distributed SQL
- Oracle Database Reference for information about initialization parameters and data dictionary views
Oracle Database 11g: Implement Streams I - 6

7 Oracle Database 11g: Implement Streams I - 7
Additional Resources
To continue your learning:
- Oracle University (OU)
- Oracle Technology Network (OTN), including the Data Replication and Integration home page
- Oracle by Example (OBE)
- Technical Support: Oracle MetaLink

Additional Resources
Oracle University offers different formats to best suit your needs:
- Instructor-led in-class training
- Live Web classes
- Self-study CD-ROMs
Oracle Technology Network is a free resource with information about the core Oracle software products, including database and development tools. You have access to:
- Technology centers
- The Oracle Community, including user groups
- Software downloads and code samples, and much more
Data Replication and Integration is the Streams-specific home page on OTN. Oracle by Example is a set of hands-on, step-by-step instructions.
Technical Support: Access to Oracle MetaLink is included as part of your annual support maintenance fees. In addition to the most up-to-date technical information available, MetaLink gives you access to:
- Service requests (SRs)
- Certification matrices
- Technical forums monitored by Oracle experts
- Software patches
- Bug reports
Oracle Database 11g: Implement Streams I - 7

8 Oracle Database 11g: Implement Streams I - 8
Data Replication
  Information sharing                  - Oracle Streams
  Data recovery and high availability  - Oracle Data Guard
  Load balancing, failover             - Oracle Real Application Clusters
  Network load reduction               - Materialized views

Data Replication
Oracle Streams, a simple solution for information sharing (which is covered in this course), may seem like the perfect solution for your business needs, but there are other options to consider. The Oracle database is rich with features and optimizations, and there may be another solution that meets your business needs and goals.
Oracle Data Guard
Oracle's primary solution to disasters is Oracle Data Guard. An Oracle Data Guard configuration is a collection of loosely connected systems consisting of a single primary database and up to nine standby databases, which can include a mix of both physical and logical standby databases. If a failure occurs on the production (primary) database, you can fail over to one of the standby databases, which becomes the new primary database.
Note: Data Guard SQL Apply mode and Oracle Streams are not supported on the same database. Data Guard Redo Apply can be used to provide disaster-recovery protection for Streams databases.
Oracle Real Application Clusters
Oracle Real Application Clusters (RAC) can be used in conjunction with Oracle Streams or Oracle Data Guard SQL Apply mode. RAC enables you to take advantage of clustered hardware by running multiple instances against the same database.
Oracle Database 11g: Implement Streams I - 8

9 Oracle Database 11g: Implement Streams I - 9
Data Replication (continued)
The RAC software manages data access so that changes are coordinated between instances, providing each instance with a consistent image of the database and enabling users and applications to benefit from the processing power of multiple machines.
Materialized Views
Oracle Streams is fully interoperable with distributed materialized views, or snapshots, which can be used to maintain updatable or read-only, point-in-time copies of data. They can be defined to contain a full copy of a table or a defined subset of the rows in the master table that satisfy a value-based selection criterion. Materialized views are periodically updated, or refreshed, from their associated master tables through transactionally consistent batch updates. Because materialized views do not require a dedicated connection, they are ideal for disconnected computing. For example, a company may choose to use updatable materialized views for the members of its sales force. A salesperson could enter orders into his or her laptop throughout the day, and then simply dial up the regional sales office at the end of the day to upload these changes and download any updates.
Oracle Database 11g: Implement Streams I - 9

10 Oracle Streams: Overview

11 Oracle Database 11g: Implement Streams I - 11
Objectives
After completing this lesson, you should be able to:
- Define Oracle Streams
- List the three basic Streams elements: capture, propagation, and apply of messages
- Describe examples of Oracle Streams implementation
- Configure databases for Streams, including:
  - Setting database initialization parameters
  - Configuring memory and database storage
  - Configuring archive and supplemental logging
  - Creating a Streams administrator
  - Configuring communication between your databases
- Replicate a table by using the MAINTAIN_* procedures
Oracle Database 11g: Implement Streams I - 11
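The single-call table replication mentioned in the last objective uses the DBMS_STREAMS_ADM.MAINTAIN_* procedures, which are covered later in the course. As a rough sketch only (the directory objects, global database names, and schema below are placeholders, and the remaining parameters are left at their defaults):

```sql
-- Hedged sketch: configure replication of hr.employees from orcl1 to
-- orcl2 in one call. SRC_DIR and DEST_DIR are assumed directory
-- objects; the global names are illustrative.
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                  => 'hr.employees',
    source_directory_object      => 'SRC_DIR',
    destination_directory_object => 'DEST_DIR',
    source_database              => 'orcl1.example.com',
    destination_database         => 'orcl2.example.com');
END;
/
```

The procedure generates and runs the capture, propagation, and apply configuration, including instantiation of the table at the destination.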

12 Oracle Database 11g: Implement Streams I - 12
What Is Oracle Streams?
- Simple solution for information sharing, providing:
  - Data replication
  - Data warehouse loading
  - Data provisioning in distributed and grid environments
  - High availability during database upgrade, platform migration, and application upgrade
  - Message queuing
  - Message management and notification
- Interfaces:
  - PL/SQL API commands
  - Enterprise Manager graphical user interface (GUI)

What Is Oracle Streams?
Oracle Streams is a set of processes and database structures that enable you to share data and messages via a data stream.
Data Replication
Oracle Streams can efficiently capture the data manipulation language (DML) and data definition language (DDL) changes that are made to database objects and replicate those changes to one or more databases. You can share information between tables that differ in both structure and content, either within the same database or across different databases. You can route the flow of data through multiple databases without having the changes applied at each intermediate server.
Data Warehouse Loading
Data warehouse loading is a special case of data replication. Some of the most critical tasks in creating and maintaining a data warehouse include refreshing data and adding new data from the operational databases. Oracle Streams can capture changes that are made to a production system and send those changes to a staging database, or directly to a data warehouse or operational data store. Support for data transformations and user-defined apply procedures gives you the flexibility to reformat data or update warehouse-specific data fields during loading. In addition, Change Data Capture (CDC, a data warehousing feature) uses some of the components of Oracle Streams to identify data that has changed before loading it into a data warehouse. You can learn more about CDC in the Oracle Database 11g: Implement and Administer a Data Warehouse course.
Oracle Database 11g: Implement Streams I - 12

13 Oracle Database 11g: Implement Streams I - 13
What Is Oracle Streams? (continued) Data Provisioning in Distributed and Grid Environments Many of the features of Oracle Streams that are useful for sharing information in a distributed environment are equally useful for data provisioning in a grid environment. Suppose that you have a production database and you need to perform data analysis. In a grid environment, you could add nodes to your production Real Application Clusters (RAC) database for additional CPU capacity to run the analysis. However, if there are departmental restrictions that prohibit running data analysis on the production database, you could replicate the tablespace data to a Streams replica and perform the analysis on that system. The File Groups Repository feature enables storage of point-in-time copies of file and tablespace data for reporting, auditing, and information sharing. Groups of files (for example, tablespaces) can be copied to the repository, or from the repository to another database or file system, without affecting access to production data. This feature uses procedures in the DBMS_STREAMS_TABLESPACE_ADM package. A File Group Repository provides a method for you to deliver information when and where it is needed within your grid environment. Database Upgrade, Platform Migration, and Application Upgrade Streams can provide high availability (that is, it allows users to continue working on a production database) while a database upgrade or migration is performed to a second database. This is possible because databases that use the Oracle Streams technology do not need to be identical. There are multiple ways to instantiate this second database: Recovery Manager (RMAN), Transportable Tablespaces (TTS), and Data Pump clients (including import and export). Instantiation is the process of creating and synchronizing a duplicated database object. Participating databases can maintain different data structures by using Streams to transform the data into the appropriate format. 
These transformations can be used to change the data-type representation of a particular column in a table, change the name of a column in a table, or change a table name or table owner. These capabilities can be used to facilitate application upgrades. For example, using Oracle Transportable Tablespaces and Oracle Streams, you can migrate applications to a new database version or platform. With a single command, you can identify a set of tablespaces in one database, ship the tablespace set to another database (even if the second database is on a different operating system or platform), and include this set in the second database. During this time, both the source and destination databases are open and available for any user activity. Meanwhile, Oracle Streams has begun to capture from the source database any changes that occur during the tablespace copy to the destination database. After the tablespace set is available at the destination, the captured changes are applied to synchronize the data with the source database. No down time is required. Message Queuing Oracle Streams enables user applications to: Enqueue messages of different types Propagate the messages to subscribing queues Notify user applications that messages are ready for consumption Dequeue messages at the destination database Oracle Database 11g: Implement Streams I - 13

14 Oracle Database 11g: Implement Streams I - 14
What Is Oracle Streams? (continued)
Oracle Streams Advanced Queuing
Oracle Streams Advanced Queuing (AQ) supports all the standard features of message-queuing systems, including multiple-consumer queues, publishing and subscribing, content-based routing, Internet propagation, transformations, and gateways to other messaging subsystems.
Message Notification
Business messages are valuable communications between applications or organizations. You can configure queues to retain explicitly enqueued messages after consumption for a specified period of time, thereby enabling the use of AQ as a business message-management system. AQ stores all messages in the database in a transactional manner so that they can be audited and tracked. You can use this audit trail to extract intelligence about the business operations.
The ability to capture messages and propagate them to relevant consumers based on rules enables you to use Oracle Streams for message notification. Messages staged in a queue may be dequeued by a messaging client or an application, and then actions can be taken based on these messages, which may include sending a notification or passing the message to a wireless gateway for transmission to a cell phone or pager. You can easily find examples of message notification on the Internet, such as subscribing to travel service notifications of flight delays or registering for price-drop alerts at an Internet store. You may also find notifications useful in a customer relationship management (CRM) application (for example, a sales manager wanting to be notified when an important customer makes a purchase).
Interfaces
You learn about both the PL/SQL and the GUI interfaces in this course.
Oracle Database 11g: Implement Streams I - 14
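As a small taste of the messaging capability described above, an application can explicitly enqueue a user message into a SYS.AnyData queue with the DBMS_AQ package. This is a hedged sketch: the queue name strmadmin.streams_queue and the message text are placeholders.

```sql
-- Hedged sketch: explicit enqueue of a user message (wrapped as
-- SYS.AnyData) into an assumed queue named strmadmin.streams_queue.
DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    payload            => SYS.ANYDATA.CONVERTVARCHAR2('order 1234 shipped'),
    msgid              => msg_id);
  COMMIT;
END;
/
```

A subscribing application would dequeue the message with the matching DBMS_AQ.DEQUEUE call; enqueue and dequeue are covered in the Streams Messaging lessons.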

15 Oracle Database 11g: Implement Streams I - 15
Streams: Overview
[Slide diagram: at the source database, changes to a database object are captured from the redo logs and staged as messages in a queue; the messages are propagated to a queue at the target database, where they are applied to the changed database object.]

Streams: Overview
A stream is a flow of information either within a database or from one database to another. A message is the smallest unit of information that is inserted into and retrieved from a queue. A message consists of data (the message payload) as well as information to govern the interpretation and use of the message data (that is, data and metadata). You can stage two types of messages:
- DDL or DML changes of database objects, formatted as a logical change record (LCR)
- User-created messages
The three basic elements of Oracle Streams are:
- Capture: To automatically capture DML or DDL events, either from the redo or as part of the user transaction. Additionally, a user-supplied application can create messages that are placed into a queue via an explicit enqueue operation.
- Propagate: To store and propagate messages between databases. Propagation can be performed explicitly if needed.
- Apply: To apply messages (DML or DDL) to a destination database or to pass the messages to an application.
By configuring these elements, you can control what information is put into a stream, how the stream flows from node to node, what happens to messages in the stream as they flow into each node, and how the stream terminates. This can occur in a single instance or in multiple instances. You can customize each task to address specific requirements and business needs.
Oracle Database 11g: Implement Streams I - 15

16 Oracle Database 11g: Implement Streams I - 16
Capture
Multiple modes of capture:
- Implicit, asynchronous capture:
  - Redo-based capture of DML and DDL changes, either locally or remotely at a downstream database
  - Extracts changes from the redo (as it is written)
  - Captures from the log buffer, online redo, or archived log files
- Implicit, synchronous capture:
  - Captures DML changes for specific tables as part of the executing transaction (always locally)
  - Captured changes are stored persistently on disk
- Explicit capture: direct enqueue of user messages

Capture
Streams can capture changes asynchronously to the user activity by extracting changes from the redo log. The redo-based capture is performed as a background process in the database. You can start and stop this process as necessary. In normal operation, if the capture process is running when the database is shut down, it automatically restarts on database restart. Additionally, it is possible to offload the capture processing to another database to minimize the impact on your production database.
You can configure Streams to capture changes locally at a source database using the redo logs, or remotely at a downstream database. If a capture process runs on a downstream database, the redo log files from the source database are copied to the downstream database, and the capture process captures the changes from these log files. For capture that is running locally at the source database, the capture process retrieves changes from the log buffer, the online redo log, or the archived log, moving seamlessly between these areas without any additional configuration.
For environments that do not support redo-based capture (such as the Standard Edition of the database), it is possible to capture changes as part of the user transaction activity. This form of capture is known as synchronous capture because the capture is synchronous (at the same time) with the user transaction. It uses an efficient internal mechanism to capture the changes and store them persistently (that is, on disk) in a queue.
Explicit capture is the enqueue of user messages into the queue. Applications can explicitly generate messages and place them in the staging area.
Oracle Database 11g: Implement Streams I - 16
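A redo-based capture process is commonly created implicitly by adding capture rules with DBMS_STREAMS_ADM. The following is a hedged sketch, not a complete configuration: the process name capture_hr and queue strmadmin.streams_queue are placeholders, and the queue is assumed to exist already.

```sql
-- Hedged sketch: create (or extend) a redo-based capture process for
-- DML changes to hr.employees; names are placeholders.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',
    streams_type   => 'capture',
    streams_name   => 'capture_hr',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => TRUE);
END;
/
```

If no capture process named capture_hr exists, this call creates one and associates it with the queue; otherwise it adds the table rule to the existing process's positive rule set.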

17 Oracle Database 11g: Implement Streams I - 17
Logical Change Record
Reformatting changes into a logical change record (LCR):
- DML (row change): old and new values; tag, transaction ID, SCN; object name, owner, command type, source database name; optional attributes (username, session, thread, and so on)
- DDL: object name, owner, type, tag, transaction ID, SCN; logon user, current schema, base table owner and name; DDL text, command type, source database name
- LOB: multiple LCRs per LOB or LONG column (piecewise chunks)
- Enqueuing messages (with the LCR) to a single queue for redo-based capture, or to a queue table for synchronous capture

Logical Change Record
A capture process reformats the changes that are captured from the redo log into an LCR. An LCR has an internal format that describes a database change: a DML or DDL change.
For DML, an LCR contains a change to the data in a single row or a change to a single LONG, LONG RAW, or LOB column in a row (sometimes referred to as a row record). A single DML statement can insert, update, merge, or delete multiple rows. Therefore, a single DML statement can produce multiple row LCRs. In addition, an update to a LONG, LONG RAW, or LOB column in a single row may result in multiple row LCRs.
For DDL, an LCR describes changes to the structure of the database. A DDL statement can create, alter, or drop a database object.
Streams can also capture information about changes beyond the actual row change, such as the username or session. However, these optional attributes require additional configuration.
After capturing an LCR, a capture process enqueues a message containing the LCR into a queue. A capture process is always associated with a single queue, and it enqueues messages only into this queue. You can associate multiple capture processes with a single queue, but it is recommended that only one capture process be assigned to a queue. If you use multiple capture processes, there should be a queue for each process.
LCRs captured by a synchronous capture are stored persistently in the queue table. It is also possible to manually create your own LCRs.
Oracle Database 11g: Implement Streams I - 17
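To illustrate the last point, a row LCR can be constructed by hand with the SYS.LCR$_ROW_RECORD type, which is useful when testing apply handlers. This is a hedged sketch: the database name, object, and column values are placeholders, and the remaining constructor parameters are left at their defaults.

```sql
-- Hedged sketch: manually build a row LCR describing an INSERT into
-- hr.regions; names and values are illustrative only.
DECLARE
  lcr SYS.LCR$_ROW_RECORD;
BEGIN
  lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
           source_database_name => 'orcl1.example.com',
           command_type         => 'INSERT',
           object_owner         => 'HR',
           object_name          => 'REGIONS');
  -- Add the "new" column values for the inserted row:
  lcr.ADD_COLUMN('new', 'REGION_ID',   SYS.ANYDATA.CONVERTNUMBER(5));
  lcr.ADD_COLUMN('new', 'REGION_NAME', SYS.ANYDATA.CONVERTVARCHAR2('Antarctica'));
END;
/
```

A user-constructed LCR of this kind can then be wrapped as SYS.AnyData and enqueued explicitly, just like any other user message.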

18 Staging and Propagating Messages
Staging and Propagating Messages
- Staging messages in the SYS.AnyData queue:
  - Capture process > buffered queue (can spill from memory to disk)
  - Synchronous capture > persistent queue table
  - User-enqueued (also non-LCRs) > disk (by default) or buffered queue, if configured
- Propagating messages from one queue to another
- Consuming messages by subscribers
- Placing messages in an error queue
[Slide diagram: the capture process reads the redo logs and enqueues messages into a buffered queue in the Streams pool of the instance SGA; messages can spill from the buffered queue to disk.]

Staging and Propagating Messages
Published messages are stored (or staged) in a queue. Oracle Streams uses two types of queues: a queue of the SYS.AnyData type and an error queue (also referred to as an "exception queue"). Both the Streams capture and the synchronous capture enqueue messages into a SYS.AnyData queue. A SYS.AnyData queue can stage messages of different types (for example, DML and DDL LCRs) as long as the message payload is wrapped as an ANYDATA PL/SQL construct. Users and applications may enqueue events into a SYS.AnyData queue or into a typed queue. A typed queue can stage messages of only one specific type.
Redo-based capture publishes messages into an in-memory staging area, that is, into the Streams pool of the System Global Area (SGA). The Streams pool is also used as the memory pool for each of the Streams processes (capture, apply). You can control the amount of memory available for all Streams processing by setting the size of the Streams pool. Buffered queues in the Streams pool improve performance for enqueuing and dequeuing messages. If the Streams pool size is insufficient, the buffered queue messages may spill to disk to free up memory. Spilled messages are written to disk in an associated spillover table named AQ$_<queue_table_name>_P, where <queue_table_name> is the name of the queue table for the queue. Synchronous capture stores its captured messages directly to a disk queue.
Oracle Database 11g: Implement Streams I - 18

19 Oracle Database 11g: Implement Streams I - 19
Staging and Propagating Messages (continued) User-enqueued messages are by default stored on disk. You can configure buffered messaging for user-enqueued messages. All messages (LCRs and user messages) can be staged in the same queue. A propagation transmits messages from one queue to another, whether within the same database or across multiple databases. In-memory queues are propagated to their in-memory destination queues, even if some buffered messages have spilled to disk. Messages persistently stored in queues, such as the synchronous capture messages, are propagated to a persistent target queue on disk. Messages remain in the staging area until consumed by all subscribers. (Subscribers are other staging areas or Streams processes.) If a message cannot be consumed for any reason, it is placed in an error queue. Streams processes and queues can be created, altered, started, stopped, and dropped in SQL*Plus or in Enterprise Manager > Data Movement > Manage (in the Streams section). Oracle Database 11g: Implement Streams I - 19
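Queues and propagations such as those described above are typically created with DBMS_STREAMS_ADM. The following is a hedged sketch, not a complete configuration: the queue names, propagation name, global database names, and the implied database link are all placeholders.

```sql
-- Hedged sketch: create a SYS.AnyData queue, then a propagation that
-- forwards hr.employees changes to a queue at orcl2; all names are
-- placeholders, and the database link to orcl2 is assumed to exist.
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');

  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'prop_to_orcl2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@orcl2.example.com',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    source_database        => 'orcl1.example.com');
END;
/
```

SET_UP_QUEUE creates both the queue table and the queue, and starts the queue; the propagation rule then controls which messages flow to the destination queue.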

20 Oracle Database 11g: Implement Streams I - 20
Applying Messages
- Directly applying the DML or DDL changes represented in the LCR (default)
- Explicit dequeuing via open-interface applications, such as JMS, C, OCI, or PL/SQL
- Automatic conflict detection with optional resolution:
  - Unresolved conflicts are placed in an error queue.
  - Transactions can be reapplied or deleted from the error queue.
- Customizable apply processing with handlers

Applying Messages
By default, the apply process applies the DML and DDL changes represented in the LCRs. It can apply to a local Oracle table or, via a database link, to a non-Oracle table. Messages in the staging area can be consumed:
- Implicitly (or directly) by an apply process
- Explicitly by an application performing the dequeue via open interfaces, such as Java Message Service (JMS), C, Oracle Call Interface (OCI), or PL/SQL
The default apply process detects conflicts where the destination row has been changed and does not contain the expected values. Conflicts are possible in a Streams configuration where data is shared between multiple databases. For example, a transaction at the source database updates a row at nearly the same time as a different transaction updates the same row at a destination database. There are no locks in a distributed environment to prevent such conflicts, so the system must automatically detect them. In this case, if data consistency between the two databases is important, an apply process must be instructed either to keep the change at the destination database or to replace it with the change from the source database. When data conflicts occur, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules. Streams automatically detects conflicts and, for update conflicts, tries to use a conflict resolution handler to resolve them.
Oracle Database 11g: Implement Streams I - 20

21 Oracle Database 11g: Implement Streams I - 21
Applying Messages (continued)
Streams offers a variety of prebuilt handlers that enable you to define conflict resolution for your database. If the prebuilt conflict resolution handlers cannot resolve a situation, you can build and use your own customized conflict resolution handlers (in an error handler or DML handler). Unresolved conflicts are placed in the error queue (which is automatically created when you create a Streams queue).
To view information about transactions in the error queue, query the DBA_APPLY_ERROR data dictionary view. A transaction may include many messages, and for an unhandled error the apply process automatically copies all messages in the transaction to the error queue. The error queue contains only local errors. When the condition that caused the error has been resolved, you can either reexecute the transaction in the error queue with the EXECUTE_ERROR procedure or delete the transaction from the error queue by using the DELETE_ERROR procedure. Both procedures are in the DBMS_APPLY_ADM package.
Oracle Database 11g: Implement Streams I - 21
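The prebuilt handlers and error-queue procedures mentioned above are in DBMS_APPLY_ADM. A hedged sketch follows: the table, column names, and transaction ID are placeholders, and the MAXIMUM method shown is only one of the available prebuilt methods.

```sql
-- Hedged sketch: resolve update conflicts on hr.employees by keeping
-- the row version with the greater value in the (assumed)
-- last_update column.
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'last_update';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',
    resolution_column => 'last_update',
    column_list       => cols);
END;
/

-- Inspect failed transactions, then retry one after fixing the cause
-- (the transaction ID below is a placeholder from DBA_APPLY_ERROR):
SELECT apply_name, local_transaction_id, error_message
FROM   dba_apply_error;

EXEC DBMS_APPLY_ADM.EXECUTE_ERROR('1.12.3456')
```

DELETE_ERROR takes the same transaction ID when a failed transaction should be discarded instead of reapplied.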

22 Oracle Database 11g: Implement Streams I - 22
Rules
- Are evaluated by a rules engine to perform actions
- Limit which messages are captured, propagated, or applied
- Are similar to the condition in the WHERE clause of a SQL query
- Are grouped into a positive rule set (inclusion) and a negative rule set (exclusion of objects)
[Slide diagram: the capture, propagation, and apply components each evaluate their own rule sets (capture rules, propagation rules, and apply rules).]

Rules
Streams enables you to control which information to share and where to share it by using rules. A rule is a database object that enables a client to perform an action when a message occurs and a condition is satisfied. Rules are evaluated by a rules engine, which is a built-in part of the Oracle database. Both user-created applications and Oracle Database features (such as Streams) can be clients of the rules engine.
Each rule is specified as a condition that is similar to the condition in the WHERE clause of a SQL query. For example, you could specify the rule condition owner = 'HR' and table_name = 'EMPLOYEES'. If a rule evaluates to TRUE for a message, the message is passed to the Streams process for further action.
To simplify management for client applications, rules are grouped into rule sets. In Oracle Streams, rule sets may be positive or negative to simplify the inclusion and exclusion of objects. Each of the following Streams components is a client of the rules engine and can have two rule sets (one positive and one negative) associated with it:
- Capture process
- Propagation job
- Apply process
Oracle Database 11g: Implement Streams I - 22
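Negative rule sets are populated the same way as positive ones, using the inclusion_rule parameter of the DBMS_STREAMS_ADM rule procedures. A hedged sketch (the process and queue names are placeholders carried over from an assumed existing configuration):

```sql
-- Hedged sketch: exclude hr.job_history from an assumed existing
-- capture process by adding a rule to its negative rule set
-- (inclusion_rule => FALSE).
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.job_history',
    streams_type   => 'capture',
    streams_name   => 'capture_hr',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => FALSE);
END;
/
```

If a message matches both rule sets, the negative rule set takes precedence, so changes to the excluded table are discarded by the capture process.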

23 Rule-Based Transformations
Transformations can be performed during:
- Capture, to format messages for all destination databases
- Propagation, to subset data before it is sent to a remote site
- Apply, to format messages for a specific database
Declarative transformations:
- Rename a schema
- Rename a table or column
- Add or delete a column
Custom transformations:
- User-supplied PL/SQL function

BEGIN
  DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name        => 'STRMADMIN.HR51',
    from_schema_name => 'HR',
    to_schema_name   => 'HR_REPL',
    operation        => 'ADD');
END;

Rule-Based Transformations
A transformation can occur at the following times:
- When the capture process enqueues a message: You use this transformation to format a message in a manner appropriate for all destination databases.
- During the propagation of an event: You use this transformation to subset data before it is sent to a remote site.
- When an apply process or messaging client dequeues a message: You use this transformation to format a message for a specific destination database. An apply process may apply the transformed message directly or send the transformed message to an apply handler for processing.
Oracle Streams supports declarative and custom transformations. Example of viewing a declarative transformation in the data dictionary:

SELECT rule_owner||'.'||rule_name rule, transform_type,
       from_schema_name, to_schema_name
FROM   DBA_STREAMS_TRANSFORMATIONS;

RULE            TYPE                        FROM  TO
STRMADMIN.HR51  DECLARATIVE TRANSFORMATION  HR    HR_REPL

Oracle Database 11g: Implement Streams I - 23

24 Oracle Streams Database Configuration
Set database initialization parameters. Ensure that the Oracle Streams memory requirements are met (Set STREAMS_POOL_SIZE to 200 MB or higher). Configure archive logging. Configure supplemental logging. Configure database storage in an Oracle Streams database: Configure a separate tablespace for the Oracle Streams administrator. Use a separate queue for capture and apply Oracle Streams clients. Grant user privileges to the Oracle Streams administrator. Configure communication between your databases. Oracle Streams Database Configuration The following are setup tasks, which you normally perform only once. However, if your Streams usage grows, you may need to revisit your database configuration. The following pages highlight the details that are relevant to most environments. You can modify parameters in SQL*Plus or Enterprise Manager > Server > Initialization Parameters (in the Database Configuration section). You can configure database storage in SQL*Plus or Enterprise Manager > Server > Tablespaces (in the Storage section). You can also configure queues in SQL*Plus or Enterprise Manager > Data Movement > Setup (in the Streams section) and also Manage (in the Streams section). For your Oracle Streams replication environment to run properly and efficiently, follow best practices when you are configuring the environment. You can see the list of best practices in the slide. For additional details, see the Oracle Streams Replication Administrator’s Guide. Oracle Database 11g: Implement Streams I - 24

25 Configuring Database Parameters
Parameters by scope: Common (COMPATIBLE, PROCESSES, STREAMS_POOL_SIZE, SGA_MAX_SIZE, SGA_TARGET, TIMED_STATISTICS); Source (LOG_ARCHIVE_CONFIG, LOG_ARCHIVE_DEST*, LOG_ARCHIVE_FORMAT, LOG_BUFFER, UNDO_RETENTION); Propagation (GLOBAL_NAMES, OPEN_LINKS). Configuring Database Parameters Before implementing Oracle Streams, you must first configure certain database parameters. The database parameters must be configured properly at each site to ensure that Oracle Streams is able to acquire the resources it needs to function reliably and to improve performance. The database parameters that must be set at each site include: COMPATIBLE: If possible, set this parameter to or higher. PROCESSES: This parameter must be set to a value large enough to accommodate all the Oracle Streams background processes used by capture, apply, and propagation for each site, as well as the processes required by other database applications. The default value (derived from other parameters) is adequate for most systems. The PROCESSES parameter is used to calculate the SESSIONS parameter, which specifies the maximum number of sessions: SESSIONS = (1.1 * PROCESSES) + 5. STREAMS_POOL_SIZE: This parameter configures a separate pool in the System Global Area (SGA) to be used exclusively for Streams operations. If you enable either Automatic Memory Management (AMM) or Automatic Shared Memory Management (ASMM), the value of STREAMS_POOL_SIZE acts as a minimum value for the automatic memory tuning. If the SGA_MAX_SIZE parameter is explicitly set in the parameter file, you must increase the value of SGA_MAX_SIZE appropriately when you increase the size of the STREAMS_POOL_SIZE parameter. For a database that has a capture process, you must initially set this parameter to 200 MB and then tune upward as needed. Oracle Database 11g: Implement Streams I - 25
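The parameters above are adjusted with ALTER SYSTEM. The values in this sketch follow the guidance in the notes (200 MB minimum Streams pool, GLOBAL_NAMES enabled); the use of SCOPE=BOTH assumes the instance runs on an spfile.

```sql
-- Sketch: assumes an spfile; tune values for your workload.
ALTER SYSTEM SET streams_pool_size = 200M SCOPE = BOTH;  -- minimum for a capture database
ALTER SYSTEM SET global_names      = TRUE SCOPE = BOTH;  -- required for propagation
ALTER SYSTEM SET undo_retention    = 3600 SCOPE = BOTH;  -- supports consistent instantiation

-- Verify the current settings.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('streams_pool_size', 'global_names', 'undo_retention');
```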

26 Oracle Database 11g: Implement Streams I - 26
Configuring Database Parameters (continued) SGA_MAX_SIZE: This parameter specifies the maximum size of the SGA for the lifetime of the instance. If you increase the size of the Streams pool or the Shared pool, you may also need to increase the size of this parameter. SGA_TARGET: This parameter specifies the total size of all SGA components. If this parameter is set to a nonzero value, the size of the Streams pool is managed by ASMM. TIMED_STATISTICS: To enable better monitoring of Streams performance, this parameter must be set to TRUE. The default value of this parameter is determined by the STATISTICS_LEVEL parameter: If STATISTICS_LEVEL is set to TYPICAL or ALL, TIMED_STATISTICS is set to TRUE. If STATISTICS_LEVEL is set to BASIC, TIMED_STATISTICS is set to FALSE. The database parameters that are specific to source sites include: LOG_ARCHIVE_CONFIG: Enables or disables the sending and receiving of redo logs. To use downstream capture and copy, specify DB_UNIQUE_NAME of the source database and the downstream database using the DG_CONFIG attribute, which also specifies the unique database names in a Data Guard configuration. LOG_ARCHIVE_DEST_n: Defines up to 10 log archive destinations. For downstream capture, you must configure at least one archive log destination parameter at the source site that points to the downstream capture site. LOG_ARCHIVE_DEST_STATE_n: Specifies the availability of the destination. LOG_ARCHIVE_FORMAT: Specifies the file name format (for example, AMER_%t_%s_%r.arc): %s corresponds to the sequence number; %r corresponds to the reset logs ID that ensures that unique names are constructed for the archived redo logs across multiple incarnations of the database; %t, which is required for Real Application Clusters configurations, corresponds to the thread. LOG_BUFFER: Specifies the amount of memory used when buffering redo entries to a redo log file. The parameter is OS dependent. It is set to the greater of 512 KB or 128 KB * CPU_COUNT.
Performance Tip: If a capture process is running on the database, set this parameter so that the capture process reads redo log records from the redo log buffer rather than from the hard disk. UNDO_RETENTION: Ensure that this parameter is set to specify an adequate undo retention period, such as 3600, for databases that contain one or more capture processes. When instantiating tables, the Oracle Flashback feature is used to automatically ensure that the exported data and the exported procedural actions for each object are consistent to a single point in time. The UNDO_RETENTION setting must be large enough to support the export of the database objects to a consistent SCN. The database parameters that are specific to propagation include: GLOBAL_NAMES: Set to TRUE at each site where information is shared. This parameter requires a database link name to match the global name of the database to which it connects. OPEN_LINKS: Because this parameter limits the number of database links that can be opened concurrently by a session, you may need to increase the default value (which is 4) at sites that contain more than three propagations or downstream capture processes. You may also need to increase the value of this parameter if database links are used for other database operations at a propagation or downstream capture site. Oracle Database 11g: Implement Streams I - 26

27 Memory Requirements for Streams
Ensure that the Oracle Streams memory requirements are met to start Streams components. (Diagram of the Streams pool within the SGA) Each queue: 10+ MB Each propagation: 1+ MB Each capture process parallelism: 10+ MB Each apply process parallelism: 1+ MB Memory Requirements for Streams The following are the memory requirements to start Oracle Streams components: Each queue requires at least 10 MB of memory. The amount of storage required is based on the number and size of the LCRs. Each capture process parallelism requires at least 10 MB of memory. The parallelism capture process parameter controls the number of processes used by the capture process to capture changes. You may be able to improve capture process performance by adjusting the capture process parallelism. Each propagation requires at least 1 MB of memory. You can use the V$STREAMS_POOL_ADVICE dynamic performance view to determine an appropriate setting. Each apply process parallelism requires at least 1 MB of memory. The parallelism apply process parameter controls the number of processes used to apply changes. You may be able to improve the apply process performance by adjusting the apply process parallelism. For example, if parallelism is set to three for a capture process, increase the Streams pool by 30 MB. If parallelism is set to five for an apply process, increase the Streams pool by 5 MB. In addition to these start requirements for the Streams pool, you need to add the Streams storage requirements, for example, for the queue tables. Oracle Database 11g: Implement Streams I - 27
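The sizing advice view mentioned above can be queried directly. This sketch uses the documented V$STREAMS_POOL_ADVICE columns; verify them against your release.

```sql
-- Sketch: lower estimated spill counts indicate a sufficiently sized
-- Streams pool at the corresponding size estimate.
SELECT streams_pool_size_for_estimate AS size_mb,
       streams_pool_size_factor       AS size_factor,
       estd_spill_count,
       estd_unspill_count
FROM   v$streams_pool_advice
ORDER  BY streams_pool_size_factor;
```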

28 Configuring Archive Logging
Must be enabled at all source sites Is used for all redo-based capture processes: local and downstream (not for synchronous capture) Provides seamless transitions in reading Requires availability of logs on disk (until they are no longer needed by any capture process) (Diagram: SGA redo log buffer (1), online redo log (2), and archived redo log (3), all readable by the local capture process) Example: Configure log destination ALTER SYSTEM SET log_archive_dest_2 = . . .; ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE'; ALTER SYSTEM SET log_archive_format='<SID>_%t_%s_%r.arc'; Configuring Archive Logging A local capture process works from the redo buffer or online redo logs whenever possible, and from the archived redo log files otherwise. The database must be in ARCHIVELOG mode to create or start a capture process. When a capture process falls behind, there is a seamless transition from accessing the buffer or reading an online redo log to reading an archived redo log, and, when a capture process catches up, there is a seamless transition from reading an archived redo log to reading an online redo log or accessing the log buffer. To implement redo-based capture, you must configure the LOG_ARCHIVE_DEST_* parameters at the source database for local capture, and at both the source and downstream databases for downstream capture. For Streams, it is recommended that you maintain these log files separate from the DB_RECOVERY_FILE_DEST location. Example: log_archive_dest_1='LOCATION=$ORACLE_BASE/oradata/<SID>/archive/ MANDATORY' You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process. When the capture process is restarted, it scans the archived redo log file from the required checkpoint SCN, forward. Therefore, the archived log file that includes the required checkpoint SCN, and all subsequent archived log files, must be available to the capture process. Oracle Database 11g: Implement Streams I - 28
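Before creating a capture process, you can confirm archiving is enabled and check which archived logs are still required; both queries use views named in the notes above.

```sql
-- Confirm the database is in ARCHIVELOG mode (required for a capture process).
SELECT log_mode FROM v$database;

-- Determine the oldest SCN each capture process still needs. Archived logs
-- containing this SCN, and all later logs, must remain available on disk.
SELECT capture_name, required_checkpoint_scn
FROM   dba_capture;
```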

29 Configuring Supplemental Logging
Places additional column data into a redo log whenever a DML operation is performed Must be configured at the source site For applying destination columns (details in notes) Can be specified at the database level or table level: Database and table level: Identification key logging for primary key columns, unique index columns, foreign key columns, or all columns Table level: For specific nonkey columns of a particular table with a user-defined supplemental log group Example redo for UPDATE orders SET order_status=5 WHERE order_id=2457: col1(PK): 2457, col5(old): 4, col5(new): 5 Configuring Supplemental Logging Supplemental logging puts additional column data in a redo log when a DML operation is performed on the table. The example in the slide shows the primary key (PK) value, and the old and new column values for an UPDATE command. The capture process takes this additional information and places it in the LCRs. Supplemental logging is always configured at the source database. The type of logging determines which old values are logged for a change. You can specify supplemental logging at the database or table level. Identification key logging (at the database or table level) is the logging of before images for a specified set of columns (primary key, unique key, foreign key, or all). Table-level supplemental logging can be configured using identification key logging, or by listing specific columns in a user-defined supplemental log group. You must enable supplemental logging at the source database to apply the following types of destination columns: Columns at the source database that have unique indexes at the target site must be conditionally logged. Oracle Database 11g: Implement Streams I - 29

30 Oracle Database 11g: Implement Streams I - 30
Configuring Supplemental Logging (continued) Any columns at the source database that are used in a primary key in tables for which changes are applied at a destination database must be unconditionally logged in a log group or by database supplemental logging of primary key columns. Any columns at the source database that are used in substitute key columns for an apply process at a destination database must be unconditionally logged. Any columns at the source database that are used by a DML handler or error handler at a destination database must be unconditionally logged. Any columns at the source database that are used by a rule or a rule-based transformation must be unconditionally logged. If you specify row subsetting for a table at a destination database, any columns at the source database that are in the destination table, or columns at the source database that are in the subset condition, must be unconditionally logged. If the parallelism of any apply process is greater than one, any unique constraint column, foreign key column, or bitmap index column (that comes from multiple source columns) must be conditionally logged. If these three types of columns come from a single source column, supplemental logging is not needed. If multiple columns are used in a column list for conflict resolution during apply, they must be conditionally logged. If you do not use supplemental logging for these types of columns at a source database, changes involving these columns may not apply properly at a destination database. Supplemental logging is enabled by default at the table level for key columns as part of the instantiation preparation (with the PREPARE_*_INSTANTIATION procedure). Additional supplemental logging must be manually configured for any additional columns that require supplemental logging. Note: LOB, LONG, LONG RAW, and user-defined data type columns cannot be part of a supplemental log group. 
See the Oracle Streams Replication Administrator’s Guide for more details. Oracle Database 11g: Implement Streams I - 30

31 Database Supplemental Logging
Two types of database logging: Minimal supplemental logging Identification key logging Databasewide supplemental logging: Logs extra redo for all tables in the database Is easier to manage and maintain than table-level supplemental logging Database-level identification key logging methods: FOREIGN KEY ALL ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS; Database Supplemental Logging There are two types of database-level supplemental logging: Minimal supplemental logging: Minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with the DML changes Identification key logging: Logging of before images for a specified set of columns (primary key, unique key, foreign key, or all) Using database identification key logging, you can enable databasewide before-image logging for all updates. The example in the slide enables supplemental logging of all primary key and unique key values throughout the database, regardless of whether any of the individual columns are modified within the transaction. You may choose this option if you configure a capture process to capture changes to an entire database. If your primary and unique key columns are the same for all source and destination databases, executing the command shown in the slide at the source database provides the necessary supplemental logging at all destination databases. You can also specify: The FOREIGN KEY option to have all columns of a row’s foreign key placed in the redo log file if any column belonging to the foreign key is modified The ALL option to have all columns of that row (except for LOB, LONG, LONG RAW, and user-defined type columns) placed in the redo log file when the row is changed Oracle Database 11g: Implement Streams I - 31

32 Oracle Database 11g: Implement Streams I - 32
Database Supplemental Logging (continued) Supplemental logging statements are cumulative. If you issue two consecutive ALTER DATABASE ADD SUPPLEMENTAL LOG DATA commands, each with a different identification key, both keys are supplementally logged. Oracle Database 11g: Implement Streams I - 32

33 Table Supplemental Logging
Can be configured by: Using table-level identification key clauses to generate either unconditional or conditional log groups Creating a supplemental log group and specifying individual column names ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS; ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG GROUP dept_cols (department_id, department_name) ALWAYS; Table Supplemental Logging Table supplemental logging specifies, at the table level, which columns are to be supplementally logged. There are two types of supplemental table log groups: Unconditional log group: Logs the before images of specified columns when the table is updated, regardless of whether the update affected any of the specified columns. An unconditional log group is sometimes called an “always log group.” Conditional log group: Logs the before images of all specified columns only if at least one of the columns in the log group is updated. You can use the identification key logging options to generate either an unconditional log group or a conditional log group: ALL PRIMARY KEY FOREIGN KEY UNIQUE Table supplemental logging for key columns (primary key, unique index, foreign key) is automatically enabled as part of a capture “prepare for instantiation” process. You can configure additional supplemental logging for other columns with the ALTER TABLE statement. ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; Oracle Database 11g: Implement Streams I - 33
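The dept_cols example above is unconditional because of the ALWAYS keyword; omitting ALWAYS yields a conditional log group. A sketch (the group name and column choice are hypothetical):

```sql
-- Sketch: hypothetical conditional log group. Without ALWAYS, before images
-- of these columns are logged only when at least one of them is updated.
ALTER TABLE hr.departments
  ADD SUPPLEMENTAL LOG GROUP dept_cond_cols (department_id, manager_id);
```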

34 Oracle Database 11g: Implement Streams I - 34
Table Supplemental Logging (continued) If you specify multiple options in a single ALTER TABLE statement, a separate conditional log group is created for each option. You can also create user-defined conditional and unconditional supplemental log groups to log supplemental information. It is recommended that you define supplemental logging using identification keys because this method enables automatic management of the supplemental log group. The first example in the slide creates a log group for the primary key columns of the HR.EMPLOYEES table. If a primary key does not exist for the table, the database uses the smallest UNIQUE index with at least one NOT NULL column and uses the columns in that index. If there is neither a primary key nor a unique index defined for the table, the database supplementally logs all scalar columns of the table. The second example in the slide creates an unconditional log group named dept_cols on the HR.DEPARTMENTS table, which consists of the department_id and department_name columns. These columns are logged every time an UPDATE statement is executed on HR.DEPARTMENTS, regardless of whether the update affected any supplementally logged column. The third example affects all columns. When you enable identification key logging at the database level, minimal supplemental logging is enabled implicitly. However, when you specify identification key logging at the table level, only the specified table is affected. Minimal supplemental logging does not impose significant overhead on the database generating the redo log files. However, enabling database-wide identification key logging can impose overhead on the database generating the redo log files. You must configure supplemental logging only on those columns for which Oracle Streams requires supplemental logging. For more information about supplemental logging, refer to the Oracle Database Utilities manual. Oracle Database 11g: Implement Streams I - 34

35 Determining Enabled Supplemental Logging Methods
Database level: SELECT SUPPLEMENTAL_LOG_DATA_MIN MIN, SUPPLEMENTAL_LOG_DATA_PK PK, SUPPLEMENTAL_LOG_DATA_UI UI, SUPPLEMENTAL_LOG_DATA_FK FK, SUPPLEMENTAL_LOG_DATA_ALL "ALL" FROM V$DATABASE; Table level: SELECT owner, table_name, log_group_type FROM DBA_LOG_GROUPS; Determining Enabled Supplemental Logging Methods To determine the current settings for supplemental logging at the database level, query the V$DATABASE view. Each column shown in the query contains the value YES if that specific database-level identification key logging option is enabled. The query shown in the slide returns an output similar to the following:
MIN      PK  UI FK  ALL
IMPLICIT YES NO YES NO
To determine the current settings for supplemental logging at the table level, query the DBA_LOG_GROUPS view. The query shown in the slide returns an output similar to:
OWNER TABLE_NAME  LOG_GROUP_TYPE
HR    EMPLOYEES   ALL COLUMN LOGGING
HR    DEPARTMENTS PRIMARY KEY LOGGING
HR    DEPARTMENTS FOREIGN KEY LOGGING
To determine which specific columns in the table are being supplementally logged, you can query the DBA_LOG_GROUP_COLUMNS view. Oracle Database 11g: Implement Streams I - 35
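The column-level detail mentioned at the end can be retrieved as follows; this sketch assumes the standard DBA_LOG_GROUP_COLUMNS columns, so verify the view definition in your release.

```sql
-- Sketch: lists each supplementally logged column per log group for HR.
SELECT owner, table_name, log_group_name, column_name, position
FROM   dba_log_group_columns
WHERE  owner = 'HR'
ORDER  BY table_name, log_group_name, position;
```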

36 Oracle Database 11g: Implement Streams I - 36
Determining Enabled Supplemental Logging Methods (continued) You can also query the configured supplemental logging information from the following views: [ALL | DBA]_CAPTURE_PREPARED_TABLES [ALL | DBA]_CAPTURE_PREPARED_SCHEMAS [ALL | DBA]_CAPTURE_PREPARED_DATABASE For example: COLUMN supplemental_log_data_pk HEADING 'PK' COLUMN supplemental_log_data_ui HEADING 'Uniq' COLUMN supplemental_log_data_fk HEADING 'FK' COLUMN supplemental_log_data_all HEADING 'All' SELECT * FROM ALL_CAPTURE_PREPARED_SCHEMAS WHERE schema_name = 'HR';
SCHEMA_NAME TIMESTAMP PK       Uniq     FK       All
HR          05-DEC-05 IMPLICIT IMPLICIT IMPLICIT NO
Oracle Database 11g: Implement Streams I - 36

37 Streams Administrator (and Database Storage)
Performs administrative functions in a Streams environment Must exist at all Streams sites Needs a default and temporary tablespace other than SYSTEM Requires the DBA role May be granted other privileges explicitly or with the DBMS_STREAMS_AUTH package CREATE USER strmadmin IDENTIFIED BY streams DEFAULT TABLESPACE STRM_TBS1 TEMPORARY TABLESPACE TEMP; CREATE DIRECTORY scripts AS '/oracle/scripts'; GRANT DBA TO strmadmin; EXEC DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE( - 'STRMADMIN', TRUE); Streams Administrator (and Database Storage) To manage a Streams environment, you must either create a user with the appropriate privileges or grant these privileges to an existing user. You must not use the SYS or SYSTEM user as a Streams administrator, and the Streams administrator must not use the SYSTEM tablespace as its default tablespace. The recommended, separate tablespace of the Streams administrator is used for queues (error, spill, and persistent). The Streams administrator must be specified at each database that participates in the Streams environment. Grant the Streams administrator the DBA role so that this user can connect to the database and manage different types of database objects in various schemas. Depending on your configuration, you may find it necessary to grant the Streams administrator the following additional privileges: The SELECT_CATALOG_ROLE privilege if you want to grant the user privileges to query non-Streams data dictionary views The SELECT ANY DICTIONARY privilege if you plan to use the Streams tool in the Oracle Enterprise Manager Console The EXECUTE privilege on any object types that the Streams administrator may need to access You can create and modify users in SQL*Plus or Enterprise Manager > Server > Users (in the Security section). Oracle Database 11g: Implement Streams I - 37

38 Oracle Database 11g: Implement Streams I - 38
Streams Administrator (and Database Storage) (continued) The DBMS_STREAMS_AUTH package can be used to grant and revoke privileges that are needed to manage Streams. The GRANT_ADMIN_PRIVILEGE procedure enables a user to perform all setups (capture, apply, propagation, queue, rules, and so on) for the Streams environment and to monitor the system performance. You must be connected as a SYSDBA user to grant all the necessary privileges. If grant_privileges is TRUE, the grantee is added to the DBA_STREAMS_ADMINISTRATOR dictionary view. BEGIN DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE( grantee => 'STRMADMIN', grant_privileges => FALSE, file_name => 'make_admin.sql', directory_name => 'scripts'); END; / You can also use the GRANT_ADMIN_PRIVILEGE procedure to generate a script that can be used to customize the privileges granted to the Streams administrator at each site. If grant_privileges is FALSE, the GRANT statements are not executed. The GRANT_ADMIN_PRIVILEGE procedure then creates a file with the specified name in the location pointed to by directory_name and opens the file in append mode. An error is raised if grant_privileges is FALSE and file_name is NULL. Writing the GRANT statements to a file is optional. But before using this procedure to generate a script, you must create a directory object. Using Directory Objects A directory object specifies an alias for a directory on the server’s file system. The Streams Administrator must have access to the file system directory to which the directory object is pointing. Use the CREATE DIRECTORY command to create the directory specified by directory_name. Directory objects are used by Data Pump when it instantiates the replication of objects. That is, Data Pump export and import need to have access to an OS directory to place files, such as .LOG files. Oracle Database 11g: Implement Streams I - 38

39 Managing Streams Administrators
SELECT * FROM DBA_STREAMS_ADMINISTRATOR;
USERNAME  LOCAL_PRIVILEGES ACCESS_FROM_REMOTE
STRMADMIN YES              NO
EXEC DBMS_STREAMS_AUTH.REVOKE_ADMIN_PRIVILEGE( grantee => 'STRMADMIN'); Managing Streams Administrators The DBA_STREAMS_ADMINISTRATOR view lists the users who have been granted privileges directly through the DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE procedure. This view has three columns: username: Name of the Streams administrator. local_privileges: Boolean variable for which YES indicates that the user was granted privileges through the GRANT_ADMIN_PRIVILEGE procedure. access_from_remote: Boolean variable for which YES indicates that the user has been granted some privileges through the GRANT_REMOTE_ADMIN_ACCESS procedure in DBMS_STREAMS_AUTH. A Streams administrator must have ACCESS_FROM_REMOTE privileges for some Streams remote administration operations through a database link, such as creating a downstream capture process with use_database_link set to TRUE. The GRANT_REMOTE_ADMIN_ACCESS procedure is called automatically by the GRANT_ADMIN_PRIVILEGE procedure. The REVOKE_ADMIN_PRIVILEGE procedure of DBMS_STREAMS_AUTH removes the specified Streams administrator from the DBA_STREAMS_ADMINISTRATOR view and revokes the privileges granted to the Streams administrator by the GRANT_ADMIN_PRIVILEGE procedure. Oracle Database 11g: Implement Streams I - 39

40 Configuring Communication Between Databases
Sending data or messages between sites requires network configuration at the source site. Bi-directional data replication requires network configuration at both sites. You must configure the following: Network connectivity (for example, tnsnames.ora) Database links CREATE DATABASE LINK remote_global_name CONNECT TO strmadmin IDENTIFIED BY streams USING 'connect_string_for_remote_db'; Configuring Communication Between Databases A database link is a schema object in one database that enables you to access objects on another database. The other database need not be an Oracle database system. However, to access non-Oracle systems, you must use Oracle Heterogeneous Services. To create a private database link, you must have the CREATE DATABASE LINK system privilege. To create a public database link, you must have the CREATE PUBLIC DATABASE LINK system privilege. Also, you must have the CREATE SESSION system privilege on the remote Oracle database. You create database links in the schema where the queues exist. When an application uses a database link to access a remote database, Oracle Database establishes a database session in the remote database on behalf of the local request. The CONNECT TO clause used in creating a database link determines how the remote database authenticates the login for the remote session. You can create fixed user, current user, and connected user database links. Current user links are available only through the Oracle Advanced Security option. The example in the slide shows a fixed user database link. After you create a database link, you can use it to refer to tables and views on the other database. In SQL statements, you can refer to a table or view on the other database by appending @ and the database link name to the table or view name. You can query a table or view on the other database or use an INSERT, UPDATE, DELETE, or LOCK TABLE statement for the table. Oracle Database 11g: Implement Streams I - 40
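Once the link exists, remote objects are referenced by appending the link name. This sketch reuses the EURO global name that appears elsewhere in the course; the table choice is illustrative.

```sql
-- Sketch: assumes a link named EURO.US.ORACLE.COM and that HR.EMPLOYEES
-- is visible to the connected user on the remote database.
SELECT COUNT(*) FROM hr.employees@EURO.US.ORACLE.COM;

-- Quick connectivity check: verifies the link resolves and a remote
-- session can be established.
SELECT * FROM dual@EURO.US.ORACLE.COM;
```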

41 Additional Preparation for File Propagation
(Diagram: an external file in a directory object at SOURCE_DB is sent through Streams as a BFILE to DEST_DB.) Additional Preparation for File Replication You can propagate a binary file between databases by using Streams. To do so, you put one or more BFILE attributes in a message payload, and then propagate the message to a remote queue. However, the message payload cannot contain any nested BFILE attributes, either in a VARRAY or as a SYS.AnyData wrapped attribute of a user-defined type. If you are replicating files between two databases, you must perform the following configuration steps at each site: Create source and destination DIRECTORY objects. Grant privileges to Streams users: READ and WRITE privileges on DIRECTORY objects, and the EXECUTE privilege on DBMS_FILE_TRANSFER. Configure network connectivity between the databases. Replicating files by using Streams has the same limitations as DBMS_FILE_TRANSFER: The size of the copied file must be a multiple of 512 bytes and must be less than or equal to 2 terabytes. Transferring the file is not transactional. The copied file is treated as a binary file, and no character set conversion is performed. File replication is typically used in Oracle Streams to implement tablespace replication. Oracle Database 11g: Implement Streams I - 41
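The directory and privilege steps can be sketched as follows; the directory name and path are assumptions, while the strmadmin grantee comes from the earlier examples.

```sql
-- Sketch: run at each site, pointing at an appropriate local OS path.
CREATE DIRECTORY file_repl_dir AS '/oracle/file_repl';   -- hypothetical path

-- Grant the Streams user access to the directory and the transfer package.
GRANT READ, WRITE ON DIRECTORY file_repl_dir TO strmadmin;
GRANT EXECUTE ON DBMS_FILE_TRANSFER TO strmadmin;
```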

42 Ways to Set Up Oracle Streams
Enterprise Manager Simplified MAINTAIN scripts Detailed PL/SQL API Ways to Set Up Oracle Streams You can set up a Streams environment with different tools that interact with each other: Enterprise Manager provides a graphical interface that enables you to perform most configuration and maintenance tasks by pointing and clicking. Your selections are then translated into the appropriate API call, which is executed in the Oracle database. This topic is primarily covered in the lesson titled “Using Enterprise Manager for Oracle Streams.” The DBMS_STREAMS_ADM package contains the MAINTAIN_TABLES, MAINTAIN_SCHEMAS, MAINTAIN_SIMPLE_TTS, MAINTAIN_TTS, and MAINTAIN_GLOBAL procedures. These are referred to as “simplified MAINTAIN scripts” because each performs multiple tasks, including setting instantiation system change numbers (SCNs), creating and synchronizing the duplicated database object(s), and creating and starting Streams processes. This topic is primarily covered in the lesson titled “Configuring Simple Streams Replication.” For maximum flexibility, you use the detailed PL/SQL API. These packages and procedures are covered in many lessons, starting with “Capture Process: Concepts.” Oracle Database 11g: Implement Streams I - 42

43 Simple Streams Configuration Table Replication
BEGIN DBMS_STREAMS_ADM.MAINTAIN_TABLES( table_names => 'OE.PROMO_TEST1', source_directory_object => 'SRC_EXP_DIR', destination_directory_object => 'DST_EXP_DIR', source_database => 'AMER.US.ORACLE.COM', destination_database => 'EURO.US.ORACLE.COM', perform_actions => TRUE, bi_directional=> TRUE, instantiation=>DBMS_STREAMS_ADM.INSTANTIATION_TABLE); END; / Simple Streams Configuration Table Replication With just one MAINTAIN_TABLES procedure call (as shown in the slide), you can configure a bi-directional table replication. As part of this configuration, several tasks are automatically performed for you: The Data Pump export utility creates a .DMP file in the OS directory to which your source directory object points. It then transfers the files to the destination database, placing them in the OS directory to which your destination directory object points. The Data Pump import utility creates the OE.PROMO_TEST1 table in the destination database. Oracle Streams “instantiates” the table—that is, it synchronizes the data between the source and destination site. Oracle Streams creates a capture process and a propagation on the AMER source site, and the related apply process on the EURO destination to transfer changes from AMER to EURO. Because bi_directional=TRUE in this example, Streams also creates a capture process and a propagation on the EURO database and an apply process on the AMER database to transfer changes from EURO to AMER. You can test this configuration by inserting a row in the OE.PROMO_TEST1 table, either on AMER or on EURO, and then see it automatically appear in the other database. Oracle Database 11g: Implement Streams I - 43
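As the notes suggest, the simplest functional check after MAINTAIN_TABLES completes is to insert a row on one site and query it on the other. A sketch only: the column names are hypothetical, so substitute the real OE.PROMO_TEST1 definition.

```sql
-- On AMER (source): insert and commit a test row.
-- Hypothetical columns for illustration.
INSERT INTO oe.promo_test1 (promo_id, promo_name)
VALUES (999, 'replication smoke test');
COMMIT;

-- On EURO (destination): after a short capture/propagate/apply delay,
-- the row should appear.
SELECT * FROM oe.promo_test1 WHERE promo_id = 999;
```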

44 Using the MAINTAIN_* Procedures Example Configurations
[Diagram: single-source configurations (local capture for reporting, downstream capture with redo transport to offload production, single-source update) and multiple-source configurations (bi-directional, cascade, disseminate, consolidate), each built from capture, propagation, and apply processes reading the redo logs]
Example Configurations Using the MAINTAIN_* Procedures You can use the MAINTAIN_SCHEMAS, MAINTAIN_TABLES, and MAINTAIN_GLOBAL procedures to configure many different types of replication, depending on your business requirements. The examples in the slide (with light yellow background and dashed borders) are single-source configurations. They have the following characteristics: There is only one source database for shared data. Other sites can apply or propagate the messages but do not generate messages of their own for the same data structures. There may be multiple source databases in a single-source environment, but two source databases do not capture any of the same data. Multiple-source environments (with yellow background and solid borders) consist of multiple databases, each of which can be the source of shared data. However, you must design and implement conflict resolution as part of this configuration. The Streams capture process captures changes from the redo logs and places them in a staging queue. This local capture requires extra database objects and resources at the source database, which can sometimes have a negative impact on database performance. Oracle Database 11g: Implement Streams I - 44

45 Oracle Database 11g: Implement Streams I - 45
Example Configurations (continued) With downstream capture, you create the capture process and the supporting objects on a database other than the source database, referred to as the downstream database. The redo logs generated at the source database can be transported to the downstream database through the Oracle database log transport service or any other supported method of transporting the redo logs, such as FTP. If you do not use the log transport service, you must add the log files to the capture process by using the ALTER DATABASE REGISTER LOGICAL LOGFILE '<filename>' FOR '<capture_name>' command. Oracle Database 11g: Implement Streams I - 45
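When log files are transferred by a method other than the log transport service, each file must be registered manually at the downstream database. A minimal sketch, assuming an illustrative file path and a downstream capture process named STRM_DCAP (both are assumptions, not values from this course):

```sql
-- Run at the downstream database after transferring the archived log file.
-- File name and capture process name below are illustrative assumptions.
ALTER DATABASE REGISTER LOGICAL LOGFILE
  '/u01/transfer/arch_1_1234.log'
  FOR 'STRM_DCAP';
```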

46 Identifying Streams Processes
[Diagram: user changes to database objects are logged in the redo log at the source; capture process amer_CP01 (with LogMiner process MS01) and synchronous capture enqueue LCRs and user messages into the source queue; a Jnnn job propagates the messages to the destination queue; apply process euro_AP01 dequeues the LCRs and applies the changes to the destination database objects]
Identifying Streams Processes A local capture process runs on a source database, a downstream capture process runs on a downstream database, a propagation job runs on the database containing the source queue of the propagation, and an apply process runs on a destination database. In Oracle Database 11g, the Streams processes are named in the following pattern: The names of the capture processes are CP00, CP01, ..., and CPnn. The names of the LogMiner reader and builder processes are MS00, MS01, ..., and MSnn. Propagation jobs are named Jnnn by the scheduler. Apply processes are named AP00, AP01, ..., and APnn. Apply readers and apply servers are named AS00, AS01, ..., and ASnn. The nn can be 0–9, a–z, or any combination of both, such as “ASd5” or “CP6f.” Synchronous capture is implemented with an Oracle-internal mechanism (as part of a DML transaction). Therefore, it does not have a Streams process that you could view at the operating system level. Oracle Database 11g: Implement Streams I - 46
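One illustrative way to match these background process names to database sessions is to query V$SESSION, whose PROGRAM column typically includes the process name in parentheses. A sketch only; the LIKE patterns assume the common PROGRAM format and may need adjusting per platform:

```sql
-- List sessions belonging to Streams background processes.
SELECT sid, serial#, program
FROM   v$session
WHERE  program LIKE '%(CP%'   -- capture processes
   OR  program LIKE '%(MS%'   -- LogMiner reader/builder
   OR  program LIKE '%(AP%'   -- apply processes
   OR  program LIKE '%(AS%';  -- apply readers and servers
```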

47 Configuring Multiple Streams Sites
If your Streams configuration consists of multiple sites, you must diagram the components to be configured at each site. Be sure to indicate how the following components interact with other sites: Source queues and destination queues Capture and apply processes Propagation processes and message routing Configuration requirements for all Streams processes at each site Configuring Multiple Streams Sites If your Streams environment consists of multiple sites, you must map the target configuration before you start configuring individual databases. If multiple Streams components are configured on a single site, the resource requirements of all Streams components must be taken into consideration, as in the following examples: Use automatic tuning to configure STREAMS_POOL_SIZE. If you also plan to create a capture process with its own staging queue in the same database, the Streams pool must be at least 34 MB (14 MB for apply plus 20 MB for capture). Nine additional processes will be created for the database (six for apply and three for capture). Adding propagation to the database requires an additional job queue process. You must also configure the global name of the database, the GLOBAL_NAMES parameter, network connectivity to the remote site, and a database link to the remote site. By making a diagram of the target configuration in advance, you can ensure that the prerequisites are met for each site, thus reducing errors when configuring individual components. Oracle Database 11g: Implement Streams I - 47
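The per-site prerequisites described above can be sketched as follows; the pool size, link name, user name, password, and service name are illustrative assumptions, not recommendations:

```sql
-- Reserve memory for the Streams pool (automatic tuning can grow it further).
ALTER SYSTEM SET STREAMS_POOL_SIZE = 48M SCOPE = BOTH;

-- Require database link names to match remote global database names.
ALTER SYSTEM SET GLOBAL_NAMES = TRUE SCOPE = BOTH;

-- As the Streams administrator, create a link to the remote site
-- (user, password, and service name are assumptions).
CREATE DATABASE LINK euro.us.oracle.com
  CONNECT TO strmadmin IDENTIFIED BY strmadmin_pwd
  USING 'EURO';
```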

48 Hub-and-Spoke Configuration
[Diagram: hub-and-spoke topology with hub database Hub1 running capture process CP01 and apply processes AP02, AP03, and AP04, and spoke databases (such as Spoke3 and Spoke4) running capture processes CP02, CP03, and CP04 and apply process AP01]
Hub-and-Spoke Configuration Business Example: The Product Data Management department of a train manufacturer supports engineering teams that operate at multiple locations across four continents. From an information technology perspective, this high degree of geographic separation is completely transparent with regard to information access, management, and high availability. A central repository of engineering metadata serves as the hub of the information technology architecture. Spoke databases are synchronized with the hub to provide remote engineering organizations local access to a replica of the central repository. The train manufacturer uses Oracle Streams bi-directional replication to implement the hub-and-spoke architecture. The hub database serves three functions: It consolidates engineering data into a single global repository. It replicates key tables at remote locations so that engineering teams have fast, reliable, local access to engineering data. It provides remote locations with continuous access to data in the event that the local spoke database becomes unavailable for any reason. In this configuration, the spoke databases can update the central source, which has conflict handlers for potential data conflicts. In other configurations, you may decide that, for the sake of data consistency, the spoke databases do not update the source. You learn more about data conflicts and how to avoid and resolve them in the following lessons. Oracle Database 11g: Implement Streams I - 48

49 N-Way Multi-Master Replication
[Diagram: three master databases; EMEA runs capture CP01 and apply processes AP02 and AP03, AMER runs capture CP02 and apply processes AP01 and AP03, and APAC runs capture CP03 and apply processes AP01 and AP02]
N-Way Multi-Master Replication An N-way or multi-master environment consists of multiple databases, each being the source and destination for each of the other master databases. N-way replication can have the following characteristics: Consistent data at each database Sharing updates directly with each of the other databases Automatic conflict detection and customizable conflict resolution Usually geographically distinct locations (frequently accessed via wide area network) Business example: A large software company has call centers around the world that require site autonomy—for example, in Europe (EMEA), America (AMER), and Asia-Pacific (APAC). Service is available round-the-clock. When customers call, they are connected to the nearest open call center. So each call center must have up-to-date customer records. (The workload follows the sun.) Implementation details for each database: Three queues (one for each process) One capture process (for local changes) Two apply processes (for the changes from each of the other source databases) Oracle Database 11g: Implement Streams I - 49

50 Oracle Database 11g: Implement Streams I - 50
N-Way Multi-Master Replication (continued) Example 1: A foreign country with five regional offices needs a unified view of authentication documents (such as passports, credit cards, and foreign documents), which may be presented at any of the regional offices. The authentication system consists of 110 tables. Each of the regional offices enters and updates documents, which are then replicated to the other four destinations. It is required that updates are reflected in every other database as soon as possible. Example 2: A financial services company uses three sets of Linux (2-node) RAC databases, Oracle Streams, and Data Guard. Oracle Streams provides the ability to follow the sun workload. Each of the three master databases is protected by a physical standby database (via Data Guard). Optional: If you want, you could use the preceding examples to sketch Streams configurations with the databases and main Streams processes. Oracle Database 11g: Implement Streams I - 50

51 Oracle Database 11g: Implement Streams I - 51
Summary In this lesson, you should have learned how to: Define Oracle Streams List the three basic Streams elements: capture, propagation, and apply of messages  Provide examples of Oracle Streams implementation Configure databases for Streams, including: Setting database initialization parameters Configuring memory and database storage Configuring archive and supplemental logging Creating a Streams administrator Configuring communication between your databases Replicate a table by using the MAINTAIN_* procedure Oracle Database 11g: Implement Streams I - 51

52 Practice 1 Overview: Viewing the Database Configurations
In this practice, you confirm that the basic setup tasks have been performed and that the AMER and EURO databases are configured as Streams source and destination databases. 1. Gather configuration information. 2. Verify the network configuration. 3. Verify the source database configuration for AMER. 4. Verify the destination database configuration for EURO. 5. Create a test table and enable supplemental logging. 6. Use the MAINTAIN_TABLES procedure to replicate the table. 7. Test your configuration: Insert a row on AMER and view it on EURO. 8. Insert a row on EURO and view it on AMER. Practice 1 Overview: Viewing the Database Configurations In the classroom setup, you complete the basic DBA tasks that you must perform before you can begin configuring Oracle Streams. Installing the software, creating databases, and configuring Oracle Net are covered in detail in the prerequisite courses. To confirm network connectivity so that the databases can communicate with each other, you can use the netmgr utility, or check the tnsnames.ora file for relevant entries and ping the databases. The database configuration for both the source and destination includes the configuration or creation of: Database parameters Sizing of memory and pools ARCHIVELOG mode Streams administrator and tablespace Database links Directory objects Oracle Database 11g: Implement Streams I - 52

53 Result of Practice 1: Bi-directional Table Replication
[Diagram: bi-directional replication of the OE.PROMO_TEST1 table between the AMER and EURO databases, using capture processes AMER$CAP and EURO$CAP, queues AMER$CAPQ, AMER$APPQ, EURO$CAPQ, and EURO$APPQ, propagations PROPAGATION$5 and PROPAGATION$6, and apply processes APPLY$AMER_2 and APPLY_AMER_2]
Oracle Database 11g: Implement Streams I - 53

54 Using Enterprise Manager for Oracle Streams

55 Oracle Database 11g: Implement Streams I - 55
Objectives After completing this lesson, you should be able to: Use wizards to configure Oracle Streams for data replication Manage Oracle Streams processes and components, such as: Capture process Rules Propagation Apply process and apply process errors Queues, queue subscribers, and queue tables Queue messages and message transformations Objectives You can configure and manage your Oracle Streams environment by using the Enterprise Manager (EM) interface. The following are some of the tasks you can perform using the graphical interface: Use a Streams setup wizard to configure your Oracle Streams environment. Monitor the current state of capture, apply, and propagation for a database. Alter the configuration of capture, apply, and propagation for a database. Create Oracle Streams Advanced Queuing (AQ) queues and queue tables. Perform queue administration. Oracle Database 11g: Implement Streams I - 55

56 Configuring Streams with EM Setup Wizard
Enterprise Manager > Data Movement > Setup (in the Streams section) Configuring Streams with EM Setup Wizard The Streams tool in Enterprise Manager (EM) enables you to configure, manage, and monitor a Streams environment by using a Web browser. The Streams Setup page is the access point for the following wizards: Streams Global, Schema, Table, and Subset Replication Wizard: Using this wizard, you can set up and replicate the whole database, specific schemas, or specific tables between two databases. Streams Tablespace Replication Wizard: This wizard configures the replication and maintenance of tablespaces between databases. Messaging: Using this option, you can create and set up queues. Oracle Database 11g: Implement Streams I - 56

57 Global, Schema, Table, and Subset Replication Wizard
Streams Global, Schema, Table and Subset Replication Wizard This wizard can configure a replication environment that replicates changes to the entire source database, certain schemas in the source database, certain tables in the source database, or certain rows in the source database. The objects configured for replication by this wizard may be in multiple tablespaces in your source database. This wizard can configure and start a single-source Streams replication environment with capture, propagation, and apply processes. Sample summary of the wizard’s operations: 1. Setup operations Set up table replication. Capture, propagate, and apply data manipulation language (DML) changes. Copy existing data to the destination database as part of setup. 2. Export/Import operations Export all objects selected from the source database. Import them to the destination database. 3. Startup operations Start the apply process first at the destination database. Start the capture process at the source database. Oracle Database 11g: Implement Streams I - 57

58 Global, Schema, Table, and Subset Replication Wizard
[Screenshots: customizing your database objects on the Configure Replication page and on the Object Selection page]
Streams Global, Schema, Table, and Subset Replication Wizard (continued) For example, to replicate all the rows in the HR.JOBS table, select or enter HR.JOBS in the Table field, and leave the Subset Condition field blank for this table. To replicate a subset of the rows in a table, specify the subset condition in the Subset Condition field for the table. When a subset condition is specified for a table, subset rules are created and only the rows in the table that satisfy the subset condition are replicated. Table rules and subset rules for the selected tables are added to the positive rule set of each Streams client. For example, to replicate the rows only for department 1700 in the HR.DEPARTMENTS table, specify this table in the Table field, and enter the following condition in the Subset Condition field: DEPARTMENT_ID=1700. Oracle Database 11g: Implement Streams I - 58
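Behind the wizard, a subset condition corresponds to subset rules such as those created by DBMS_STREAMS_ADM.ADD_SUBSET_RULES. A hedged sketch for the HR.DEPARTMENTS example above; the capture process and queue names are assumptions:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'HR.DEPARTMENTS',
    dml_condition => 'department_id = 1700',
    streams_type  => 'capture',
    streams_name  => 'streams_capture',           -- assumed capture name
    queue_name    => 'strmadmin.streams_queue');  -- assumed queue
END;
/
```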

59 Streams Tablespace Replication Wizard
[Wizard progress: Configuring Streams, Cloning tablespaces, Starting apply, Starting capture]
Streams Tablespace Replication Wizard This wizard can configure a single-source Streams replication environment that replicates changes to all database objects in a particular self-contained tablespace or a set of self-contained tablespaces. A self-contained tablespace has no references from the tablespace pointing outside of the tablespace. For example, if an index in the tablespace is for a table in a different tablespace, the tablespace is not self-contained. When there are multiple tablespaces, a self-contained tablespace set has no references from inside the set of tablespaces pointing outside of the set of tablespaces. This wizard configures a Streams replication environment that uses a capture process to capture changes at a source database, a propagation to propagate the changes in the form of logical change records (LCRs) to the destination database, and an apply process at the destination database to apply the changes. Sample summary of the wizard’s operations: 1. Setting up Streams between the source database and the destination database 2. Cloning the STREAMS_REP tablespace from the source database at the destination database 3. Starting the apply process first at the destination database 4. Starting the capture process at the source database Oracle Database 11g: Implement Streams I - 59

60 Oracle Database 11g: Implement Streams I - 60
Messaging Setup The Messaging option opens the Messaging subtab. Using this page, you can configure queues, queue tables, and propagations in a messaging environment. Users and applications can enqueue messages into a queue, propagate messages to subscribing queues, notify user applications that messages are ready for consumption, and dequeue messages at the destination. A queue may be configured to stage messages only of a particular type, or a queue may be configured as an ANYDATA queue. Messages of any type can be wrapped in an ANYDATA wrapper and staged in ANYDATA queues. Streams messaging supports all the standard features of message queuing systems, including multiple-consumer queues, publish and subscribe, content-based routing, Internet propagation, transformations, and gateways to other messaging subsystems. Oracle Database 11g: Implement Streams I - 60
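An ANYDATA queue like those the Messaging page creates can also be set up directly with DBMS_STREAMS_ADM.SET_UP_QUEUE; the owner and object names below are illustrative assumptions:

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_qt',  -- assumed queue table name
    queue_name  => 'strmadmin.streams_q',   -- assumed ANYDATA queue name
    queue_user  => 'strmadmin');            -- user granted enqueue/dequeue
END;
/
```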

61 Oracle Database 11g: Implement Streams I - 61
Managing Streams Streams Overview page Managing the capture process Managing rules Managing propagation Managing the apply process Managing queues, queue subscribers, and queue tables Managing queue messages and message transformations Managing Streams You can perform all major Streams management tasks in Enterprise Manager. Oracle Database 11g: Implement Streams I - 61

62 Oracle Database 11g: Implement Streams I - 62
Streams Overview Page Streams Overview Page After you have configured your Oracle Streams environment, you can monitor all Oracle Streams components for a database by using the Streams Overview tab. Oracle Database 11g: Implement Streams I - 62

63 Managing the Capture Process
When you click the Capture tab under Streams Management, the Capture administration page appears. On this page, you can: View all capture processes for a database. View their associated rule sets and the rules within. Determine which queue each capture process is associated with. View the current state and status of each capture process. View FIRST_SCN and APPLIED_SCN for each capture process. Determine the type of capture process (local or downstream). View any errors generated by a capture process. Manage the rules for a capture process by clicking the appropriate rule set name link. Edit the capture process parameters or the FIRST_SCN setting. View the statistics for a capture process. Start, stop, or delete a capture process. Oracle Database 11g: Implement Streams I - 63
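The same information that this EM page displays is exposed through the DBA_CAPTURE dictionary view; a query along these lines shows the status, SCNs, type, and any errors for each capture process:

```sql
SELECT capture_name, queue_name, status, capture_type,
       first_scn, applied_scn, error_number, error_message
FROM   dba_capture;
```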

64 Capture Process Details
When you select a capture process and click the View button (or click the capture process name link), the View Capture Details page appears. This page contains detailed information about the configuration of the capture process. You can also use the Search field on this page to query the objects for which this capture process has rules. You can search for object, schema, or global rules, as well as positive, negative, and data definition language (DDL) rules. Oracle Database 11g: Implement Streams I - 64

65 Oracle Database 11g: Implement Streams I - 65
Managing Rules . . . Managing Rules When you click the hyperlinked rule set name for a capture process, the Manage Rule Set page appears. Using this page, you can perform the following rule set actions: Create Streams rules (global, schema, table, or subset), and optionally add a user-defined condition to the new rule. Add a rule to the rule set. Edit the rule condition for a rule in the rule set. Specify a rule-based transformation for a rule in the rule set. Remove a rule from the rule set. Delete a rule from the database. The Edit Rule page enables you to edit a rule. You can change the rule condition on this page. However, you should be aware that editing a rule condition modifies the behavior of all rules engine clients that evaluate the rule. In the Associated Transformation field, you can enter the name of a PL/SQL function to be used as a rule-based transformation for the rule, in the form function_owner.function_name. To change the function, modify the function name or use the search control to modify it. If you have not yet created the PL/SQL function, you can click the Create PL/SQL Functions link to open the Functions page. Oracle Database 11g: Implement Streams I - 65

66 Oracle Database 11g: Implement Streams I - 66
Managing Propagation Managing Propagation When you click the Propagation tab on the Streams Overview page, the Propagation administration page appears. On this page, you can: View all propagations for a database. View the rule sets used by each propagation. Determine the source and destination queues for each propagation. View the current status of each propagation. Check for propagation failures or errors. Manage the rules for a propagation by clicking the appropriate rule set name link. View the statistics for a propagation. Configure a new propagation or delete an existing propagation. Oracle Database 11g: Implement Streams I - 66
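Equivalent information is available from the DBA_PROPAGATION dictionary view, for example:

```sql
SELECT propagation_name, source_queue_name,
       destination_queue_name, destination_dblink,
       status, error_date, error_message
FROM   dba_propagation;
```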

67 Oracle Database 11g: Implement Streams I - 67
Managing Propagation Managing Propagation (continued) The Setup Propagation page enables you to set up a propagation. Using the Setup Propagation page, you can create a propagation by performing one or more of the following steps: Enter a name for the propagation in the Propagation Name field. Specify an existing queue as the source queue by entering it in the Source Queue field, or by clicking the flashlight icon and selecting the queue from the Queue Search page. Specify an existing destination database link by entering it in the Destination Database Link field, or by clicking the flashlight icon and selecting the destination database link from the Search and Select: Database Link page. Create a database link for the destination by clicking the Create Database Link button. Specify an existing queue at the destination database as the destination queue by entering it in the Destination Queue field. Specify a positive rule set for the propagation by entering its name in the Positive Rule Set field, or by clicking the flashlight icon and selecting a positive rule set from the Search and Select: Rule Set page. Specify a negative rule set by entering its name in the Negative Rule Set field, or by clicking the flashlight icon and selecting the negative rule set from the Search and Select: Rule Set page. Set up the propagation by clicking OK. Cancel the operation and return to the Messaging page by clicking the Cancel button. Oracle Database 11g: Implement Streams I - 67

68 Managing the Apply Process
When you click the Apply tab under Streams Management, the Apply administration page appears. On this page, you can: View all apply processes for a database. View the rule sets used by each apply process. Determine which queue an apply process is associated with. Determine the source database for each apply process. View the current state and status of each apply process. Determine whether the apply process processes captured or user-enqueued messages. Determine whether there are any handlers associated with the apply process. View the apply tag used by each apply process. Manage the rules for an apply process by clicking the appropriate rule set name link. Edit the apply process parameters. View the statistics and errors for an apply process. Start, stop, or delete an apply process. To view configuration details, select an apply process and click the View button (or click the apply process name link). Oracle Database 11g: Implement Streams I - 68
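The corresponding dictionary views are DBA_APPLY and DBA_APPLY_ERROR; for example:

```sql
-- Apply process configuration and state
SELECT apply_name, queue_name, status, apply_captured, apply_tag
FROM   dba_apply;

-- Error transactions waiting in the error queue for each apply process
SELECT apply_name, source_database, local_transaction_id,
       error_message, message_count
FROM   dba_apply_error;
```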

69 Oracle Database 11g: Implement Streams I - 69
Managing Queues Managing Queues On the Streams Management page, click the Messaging tab to open the Queue Messaging administration page. The Messaging page contains an overview of all queues in a database. To display particular queues, specify the schema or queue name, or both, and click Go. To display all queues, leave the search fields blank and click Go. Using the Streams: Messaging page, you can: Go to the Create Queue page by clicking Create. Change some of the parameters of a queue by selecting it in the Select column, and then clicking Edit. You can also edit queue parameters by clicking the queue name in the Queue Name column. Delete a queue by selecting it in the Select column, and then clicking Delete. Start a queue by selecting it in the Select column, selecting Start Queue from the Action drop-down menu, and then clicking Go. Stop a queue by selecting it in the Select column, selecting Stop Queue from the Action drop-down menu, and then clicking Go. Add, delete, or modify subscribers to a queue by selecting it in the Select column, selecting Subscribers from the Action drop-down menu, and then clicking Go. Oracle Database 11g: Implement Streams I - 69

70 Oracle Database 11g: Implement Streams I - 70
Managing Queues (continued) Modify or delete propagation schedules for a queue by selecting it in the Select column, selecting Propagation Schedules from the Action drop-down menu, and then clicking Go. This option takes you to the Queue_Name: Propagation Schedules page, where you can also set up propagation for a queue by clicking Create. View statistics for a queue by selecting it in the Select column, selecting Queue Statistics from the Action drop-down menu, and then clicking Go. View information about messages in a queue by selecting it in the Select column, selecting Messages from the Action drop-down menu, and then clicking Go. Purge messages from a queue by selecting it in the Select column, selecting Purge from the Action drop-down menu, and then clicking Go. Oracle Database 11g: Implement Streams I - 70

71 Managing Queue Subscribers
Managing Queue Subscribers The Subscribers page displays information about current subscribers to a queue. The contents of the page depend on whether or not the queue is persistent: If the queue is persistent, using the Subscribers page, you can: Restrict the subscribers displayed by entering a subscriber name in the Name field under Search and clicking Go. Restrict the subscribers displayed by entering a subscriber address in the Address field under Search and clicking Go. Restrict the subscribers displayed by entering a database link in the Database Link field under Search, or by clicking the flashlight icon and browsing the Search and Select: Database Link page, and clicking Go. Go to the Create Subscriber page by clicking Create. Edit subscriber parameters by selecting the subscriber under the Select column, and then clicking Edit. Delete a subscriber by selecting the subscriber under the Select column, and then clicking Delete. Oracle Database 11g: Implement Streams I - 71

72 Oracle Database 11g: Implement Streams I - 72
Managing Queue Subscribers (continued) If the queue is nonpersistent, using the Subscribers page, you can: Restrict the subscribers displayed by entering a subscriber name in the Name field under Search and clicking Go Create a new subscriber by clicking Add Another Row, entering the subscriber name in the Name column of the table, and clicking OK. Nonpersistent queue subscribers must be local. Delete a subscriber by clicking the trash icon under Delete. After confirming your subscriber deletion on the Confirmation page, you return to the Subscribers page. Cancel the Add Another Row operation by clicking Revert. If you have already clicked OK after creating a new subscriber or deleted an existing subscriber, clicking Revert does not undo the change. Oracle Database 11g: Implement Streams I - 72

73 Oracle Database 11g: Implement Streams I - 73
Managing Queue Tables Managing Queue Tables You find a link for Queue Tables on the lower portion of the Messaging page. The Queue Tables page contains an overview of all queue tables in a database. To display particular queue tables, specify the schema in the Schema field or the queue table name in the Queue Table Name field, or both, and click Go. To display all queue tables, leave the search fields blank, and click Go. Using the Queue Tables page, you can: Go to the Create Queue Table page by clicking the Create button Delete a queue table by selecting it in the Select column, and then clicking the Delete button Edit the parameters of a queue table by selecting it in the Select column, and then clicking the Edit button. You can also edit queue table parameters by clicking the queue table name in the Queue Table column of the table. Retrieve information about messages in a queue table by selecting it in the Select column, and then clicking the Messages button Oracle Database 11g: Implement Streams I - 73

74 Managing Queue Messages
The Messages page displays information about the messages in a queue. Using the Messages page, you can limit the displayed messages: To a particular state (Ready, Processed, Undeliverable, Waiting, Expired, Spilled, In Memory, Deferred, Deferred Spilled, or Buffered Expired) By priority To a particular enqueue time interval To a particular dequeue time interval To a particular consumer Note: This page may not be populated in some Streams versions. Oracle Database 11g: Implement Streams I - 74

75 Purging Queue Messages
The Purge page enables you to purge some or all messages in a queue. Using the Purge page, you can: Isolate the queue during the purge operation by selecting the “Block All Concurrent Enqueuers and Dequeuers While the Queue is Being Purged” check box Purge all messages by selecting the Purge All Messages option Purge selected messages by selecting the “Purge Messages Based On Condition Specified” option Choose simple purge conditions by selecting the Simple option: Purge only Ready, Processed, Undeliverable, Wait, Expired, Spilled, In Memory, Deferred, Deferred Spilled, or Buffered Expired messages. Purge messages by priority. Purge messages from a particular enqueue time interval. Purge messages from a particular dequeue time interval. Specify your own purge condition in the format of a SQL WHERE clause by selecting the Advanced option and entering the condition in the Advanced field View the SQL statement corresponding to your current choices by clicking the Show SQL button Oracle Database 11g: Implement Streams I - 75
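The Purge page maps onto the DBMS_AQADM.PURGE_QUEUE_TABLE procedure. A hedged sketch that removes only processed messages from an assumed queue table; the purge condition uses the qtview alias documented for that procedure:

```sql
DECLARE
  po DBMS_AQADM.AQ$_PURGE_OPTIONS_T;
BEGIN
  po.block := TRUE;  -- block concurrent enqueuers/dequeuers during the purge
  DBMS_AQADM.PURGE_QUEUE_TABLE(
    queue_table     => 'strmadmin.streams_qt',  -- assumed queue table
    purge_condition => 'qtview.msg_state = ''PROCESSED''',
    purge_options   => po);
END;
/
```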

76 Managing Message Transformations
The Transformations page displays information about transformations, which are SQL functions that transform one Oracle object type to another. You can access the Transformations page by clicking the Transformations link in the Related Links section on the Streams > Messaging page. To display particular transformations, specify the schema in the Schema field or the transformation name in the Transformation Name field, or both, and click Go. To display all transformations, leave the search fields blank and click Go. Using the Transformations page, you can: Go to the Create Transformation page by clicking the Create button Make changes to an existing transformation by selecting it under the Select column, and then clicking Edit. You can also edit an existing transformation by clicking its name under Transformation. Delete a transformation by selecting it under the Select column, and then clicking the Delete button For more information about message transformations, refer to the section titled “Creating a Transformation” in the Oracle Streams Advanced Queuing User’s Guide and Reference. Oracle Database 11g: Implement Streams I - 76
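A transformation of this kind can also be created with DBMS_TRANSFORM.CREATE_TRANSFORMATION; the schema, type, and function names below are illustrative assumptions:

```sql
BEGIN
  DBMS_TRANSFORM.CREATE_TRANSFORMATION(
    schema         => 'STRMADMIN',
    name           => 'ORDER_XFORM',           -- assumed transformation name
    from_schema    => 'OE', from_type => 'ORDER_TYP_V1',
    to_schema      => 'OE', to_type   => 'ORDER_TYP_V2',
    -- assumed PL/SQL function converting one type to the other
    transformation => 'STRMADMIN.CONVERT_ORDER(source.user_data)');
END;
/
```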

77 Oracle Database 11g: Implement Streams I - 77
Summary In this lesson, you should have learned how to: Use wizards to configure Oracle Streams for data replication Manage Oracle Streams processes and components, such as: Capture process Rules Propagation Apply process and apply process errors Queues, queue subscribers, and queue tables Queue messages and message transformations Oracle Database 11g: Implement Streams I - 77

78 Practice 2 Overview: Using EM for Tablespace Replication
This practice covers the following topics: Using EM to view your previously completed table replication Using EM to configure tablespace replication Oracle Database 11g: Implement Streams I - 78

79 Result of Practice 2: One-Directional Tablespace Replication
[Diagram: One-directional tablespace replication. On the AMER database, the STREAMS_CAPTURE process enqueues changes to the HR.SALES table (in the STREAMS_REP tablespace) into the STREAMS_CAPTURE_QUEUE; STREAMS_PROPAGATION sends them to the STREAMS_APPLY_QUEUE on the EURO database, where the STREAMS_APPLY process applies them to HR.SALES in the STREAMS_REP tablespace.] Oracle Database 11g: Implement Streams I - 79

80 Configuring Simple Streams Replication

81 Oracle Database 11g: Implement Streams I - 81
Objectives After completing this lesson, you should be able to: Provide an overview of the Streams configuration procedures Configure replication of a tablespace between two databases Configure replication of a table, schema, or entire database with a single procedural call Drop a queue table Remove the Oracle Streams configuration from a database Oracle Database 11g: Implement Streams I - 81

82 Configuration Procedures: Overview
Single-step procedures for configuration and management: MAINTAIN_TABLES MAINTAIN_SCHEMAS MAINTAIN_SIMPLE_TTS MAINTAIN_TTS MAINTAIN_GLOBAL Each procedure: Is in the DBMS_STREAMS_ADM package Can either execute immediately or generate a script Sets up an entire Streams environment, including: Setting instantiation SCNs Creating the Streams processes Starting the Streams processes Configuration Procedures: Overview You can use single-step procedures for configuring and maintaining replication of changes to a given set of tables, a list of schemas, or an entire database. Each procedure sets up an entire Oracle Streams environment, including setting instantiation system change numbers (SCNs), and creating and starting the Streams processes. Each procedure can either execute all commands in one step or generate a script that can be customized and executed at a later time. The following procedures in the DBMS_STREAMS_ADM package provide an easy method of configuring a replication environment that is maintained by Oracle Streams: MAINTAIN_TABLES configures an Oracle Streams environment that replicates changes to specified tables between two databases. MAINTAIN_SCHEMAS configures an Oracle Streams environment that replicates changes to specified schemas between two databases. MAINTAIN_SIMPLE_TTS configures an Oracle Streams environment that replicates changes to a single, self-contained tablespace between two databases. MAINTAIN_TTS configures an Oracle Streams environment that replicates changes to a self-contained set of tablespaces. MAINTAIN_GLOBAL configures an Oracle Streams environment that replicates changes at the database level between two databases. Oracle Database 11g: Implement Streams I - 82

83 Oracle Database 11g: Implement Streams I - 83
Configuration Procedures: Overview (continued) Procedure Requirements You run these procedures at the capture database. You must enable ARCHIVE LOGGING in the source database. You must run these procedures as a Streams administrator who has been granted the DBA role. This user must also have READ and WRITE privileges on each DIRECTORY object used during the configuration process. Both databases must be open during configuration. If the procedure is generating only a script, the database specified in the destination_database parameter does not need to be open when you run the procedure, but both databases must be open when you run the generated script. Database link access to the other database must be available for: The Streams administrator at the destination database, if bi_directional is set to TRUE or if a network instantiation is being performed The capture user at the downstream database, if configuring downstream capture The Streams administrator at the source database, if configuring downstream capture Usage Guidance The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures must be used instead of the MAINTAIN_SIMPLE_TABLESPACE and MAINTAIN_TABLESPACES procedures. The MAINTAIN_* procedures automatically exclude database objects that are not supported by Oracle Streams by creating appropriate rule sets. All procedures listed in this lesson can be used to configure a downstream capture replication environment and a bi-directional replication environment. Allow Streams to name the queues and processes whenever possible. Oracle Database 11g: Implement Streams I - 83
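The ARCHIVELOG and privilege prerequisites above can be verified before running any of the MAINTAIN_* procedures. The following queries are a sketch; the STRMADMIN user name and the directory object names are illustrative and should be adapted to your environment:

```sql
-- Confirm that the source database is in ARCHIVELOG mode
SELECT log_mode FROM v$database;

-- Confirm that the Streams administrator holds the DBA role
SELECT granted_role FROM dba_role_privs WHERE grantee = 'STRMADMIN';

-- Confirm READ and WRITE privileges on the directory objects to be used
SELECT table_name AS directory_name, privilege
FROM   dba_tab_privs
WHERE  grantee = 'STRMADMIN'
AND    table_name IN ('SOURCE_DIRECTORY', 'DEST_DIRECTORY');
```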

84 Configuration Decisions
In preparing to run the simplified configuration procedures, you need to make the following decisions: Whether to maintain DDL changes or not Which type of replication configuration to use: One-way or bi-directional Local or downstream capture  Hub-and-spoke or n-way replication How to perform the instantiation (that is, data synchronization between source and destination) Whether to configure replication directly or generate a script Configuration Decisions The answers to the questions listed in the slide help determine the proper values for various arguments in the procedures and the preconfiguration steps that you must perform. Capturing DDL By default, only data manipulation language (DML) changes are captured for replicated objects. You must explicitly instruct Streams to also capture and apply data definition language (DDL) changes. If you want to capture both DML and DDL changes for a tablespace, the MAINTAIN_SIMPLE_TTS procedure does not configure DDL rules, but the MAINTAIN_TTS procedure can be instructed to configure DDL rules. Type of Replication Oracle Streams is very flexible with regard to configuring the replication of data. You can configure replication by using these procedures in the following ways: One-way: A one-way replication environment is the same as a single-source environment. Bi-directional: Changes are captured at each site and sent to the other site. This is referred to as bi-directional replication. Local capture: Changes are captured locally at the source database. Oracle Database 11g: Implement Streams I - 84

85 Oracle Database 11g: Implement Streams I - 85
Configuration Decisions (continued) Downstream: With downstream capture, the related database objects (such as the Streams queue, the capture process, and the capture process rules) are created on a different database than the source database. This alternate database is referred to as the downstream database. Downstream capture is covered in more detail later in this course. Hub-and-spoke replication: This is a configuration of a central database, or hub, that communicates with one or more secondary databases, or spokes. The spokes do not communicate directly with each other. The spokes may or may not allow changes to the replicated database objects (that is, the replication can be one-way or bi-directional). N-way replication: This configuration is implemented when each database communicates directly with each of the other databases (in a peer environment consisting of multiple databases). Performing Instantiation Instantiation is a process that synchronizes data between the source site and the destination site. There are different ways in which data can be instantiated. The topic titled “Instantiation Options” discusses the different methods of instantiation that can be used with the simplified replication configuration procedures. Script Generation When using the simplified replication configuration procedures, you can either have the procedures perform all actions immediately, or you can choose to have the procedures write all actions to be performed in a script. You can then edit this script and run it at a later time. Oracle Database 11g: Implement Streams I - 85

86 Oracle Database 11g: Implement Streams I - 86
Prerequisite Steps What you have already done: Configuring a Streams administrator at each site Creating a directory object to store Data Pump dump files and the tablespace data files at each site, if needed Creating database links at each site, as needed Prerequisite Steps After you have determined the type of environment you want to configure, perform the prerequisite steps listed in the slide at each database that participates in the Oracle Streams environment. For example, if you are configuring one-way replication from SITE1 to SITE2, you must create a database link that connects SITE1 to SITE2, but you do not have to create a database link that connects SITE2 back to SITE1. However, it is recommended that you create this database link (from the destination database to the source database) because it simplifies the setup. If you are configuring downstream capture, you must perform additional configuration steps to enable the transfer of the redo logs to the downstream database. There must be a directory object at each site that points to the location of the data files. Oracle Database 11g: Implement Streams I - 86
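The prerequisite steps listed above can be sketched as follows; the user, password, directory, path, and database link names are illustrative and should be adapted to your environment:

```sql
-- 1. Create and authorize a Streams administrator (run as SYSDBA)
CREATE USER strmadmin IDENTIFIED BY password
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT DBA TO strmadmin;
EXEC DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin')

-- 2. Create a directory object for the Data Pump dump files
CREATE DIRECTORY source_directory AS '/u01/app/oracle/streams';
GRANT READ, WRITE ON DIRECTORY source_directory TO strmadmin;

-- 3. As strmadmin, create a database link to the destination database
CREATE DATABASE LINK site2.net CONNECT TO strmadmin
  IDENTIFIED BY password USING 'site2.net';
```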

87 Instantiation Options
Instantiation is a three-part process: Preparing database objects for instantiation at a source database Optionally copying the database objects from a source database to a destination database Setting the instantiation system change number (SCN) for each instantiated database object With MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, and MAINTAIN_TABLES procedures in the DBMS_STREAMS_ADM package: Data Pump export dump file instantiation INSTANTIATION_FULL INSTANTIATION_SCHEMA INSTANTIATION_TABLE Data Pump network import instantiation INSTANTIATION_FULL_NETWORK INSTANTIATION_SCHEMA_NETWORK INSTANTIATION_TABLE_NETWORK Other instantiation methods: INSTANTIATION_NONE Instantiation Options When you specify one of the instantiation options listed in the slide with the MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, and MAINTAIN_TABLES procedures, the instantiation of the replicated objects is performed as part of the configuration of the Streams replication environment. You do not have to perform any action other than to specify the type of instantiation to be performed. When specifying a type of instantiation, the level of instantiation must match the level of the MAINTAIN_* procedure that is being used. For example, if you are executing the MAINTAIN_SCHEMAS procedure, you can choose only one of the following instantiation options: DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA_NETWORK DBMS_STREAMS_ADM.INSTANTIATION_NONE You cannot choose a table-level or global-level instantiation method when configuring replication at the schema level. If you use the MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, and MAINTAIN_TABLES procedures and specify the DBMS_STREAMS_ADM.INSTANTIATION_NONE instantiation option, you must perform the necessary instantiation actions manually. When performing instantiation manually, you can use any instantiation method. 
The different methods of manually instantiating replicated objects are covered in the lesson titled “Instantiation.” Oracle Database 11g: Implement Streams I - 87

88 Instantiation Options
The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures: Do not use the instantiation options listed in the previous slide. Use procedures in the DBMS_STREAMS_ADM package 1. Creating a transportable tablespace set 2. Transferring files to the destination database 3. Importing the transportable tablespace set at the destination database Instantiation Options (continued) The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures do not use the instantiation options listed in the slide; they use procedures in the DBMS_STREAMS_ADM package to instantiate the replicated objects. These procedures: 1. Create a transportable tablespace set 2. Transfer the files that make up the transportable tablespace set to the destination database 3. Import the transportable tablespace set at the destination database Oracle Database 11g: Implement Streams I - 88

89 Replicating a Single Tablespace
[Diagram: bi-directional replication of the APP_DATA tablespace between a Microsoft Windows NT database and a Linux IA (32-bit) database. Each database captures changes, propagates them, and applies changes from the other database.] EXEC DBMS_STREAMS_ADM.MAINTAIN_SIMPLE_TTS(- 'APP_DATA','SOURCE_DIRECTORY','DEST_DIRECTORY',- source_database => 'SITE1.NET', - destination_database=>'SITE2.NET', - bi_directional=> TRUE); Replicating a Single Tablespace With a single procedural call, you can not only clone a tablespace or a set of tablespaces on a source database and copy them to a destination database, but also choose to automatically configure data replication between the two sites for all objects in the tablespace. You can use the MAINTAIN_SIMPLE_TTS procedure to configure Oracle Streams replication for a simple tablespace, or the MAINTAIN_TTS procedure to configure Oracle Streams replication for a set of self-contained tablespaces. Both these procedures are in the DBMS_STREAMS_ADM package. These procedures set up either a single-source Streams environment or a bi-directional Streams environment: For a single-source environment, the source site can be either the local database or a downstream database, as specified by the source_database parameter. When these procedures configure downstream capture, they always configure archived-log downstream capture and not real-time downstream capture. However, the scripts generated by these procedures can be modified to configure real-time downstream capture. Oracle Database 11g: Implement Streams I - 89

90 Oracle Database 11g: Implement Streams I - 90
Replicating a Single Tablespace (continued) The bi_directional parameter for each procedure controls whether the Oracle Streams configuration is single-source or bi-directional. If bi_directional is FALSE, a capture process at the local database captures DML changes to the tables in the specified tablespace or tablespace set, a propagation propagates these changes to the destination database, and an apply process at the destination database applies these changes. If bi_directional is TRUE, each database captures changes and propagates them to the other database, and each database applies changes from the other database. The destination database is always a local capture site in this case. These procedures cannot be used to configure multidirectional replication where changes may be cycled back to a source database by a third database in the environment. Usage Notes The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures can perform the configuration actions online or generate a script to perform the actions at a later time, depending on how the procedures are called. If generating a script, you must create a directory object at the source site for the location in which to place the generated script. The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures automatically exclude database objects that are not supported by Oracle Streams in the tablespace from the replication environment by creating negative rules for those objects for capture and apply. If configuring bi-directional replication, configure conflict resolution before you allow users to make changes to the objects in the tablespace set. You can use these procedures to clone tablespaces between databases on different operating systems, as long as cross-platform transportable tablespaces are supported between the two different operating systems and the database character sets match. To determine which platforms are supported, query the V$TRANSPORTABLE_PLATFORM dictionary view. 
However, if the Recovery Manager (RMAN) CONVERT DATABASE command is used during the instantiation process for one of these procedures, the destination database cannot be the capture database. Refer to the PL/SQL Packages and Types Reference manual for more details about using these procedures. Oracle Database 11g: Implement Streams I - 90
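The cross-platform check mentioned in the usage notes can be run as follows:

```sql
-- List the platforms to which tablespaces can be transported,
-- together with their endian formats
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_name;
```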

91 Replicating a Set of Tablespaces
[Diagram: replication between a Solaris™ OE (32-bit) database and an HP-UX (64-bit) database.] DBMS_STREAMS_ADM.MAINTAIN_TTS( ts_set,'SOURCE_DIRECTORY','DEST_DIRECTORY', destination_database=>'SITE2.NET', capture_name => 'TBSP_CAPTURE', bi_directional=> FALSE); Replicating a Set of Tablespaces The MAINTAIN_TTS procedure of DBMS_STREAMS_ADM works similarly to the MAINTAIN_SIMPLE_TTS procedure, but enables you to clone and configure Oracle Streams replication for multiple tablespaces in a single procedural call. Instead of supplying the name of a single tablespace, you supply the name of a tablespace set, which is of the DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET type. TYPE TABLESPACE_SET IS TABLE OF VARCHAR2(32) INDEX BY BINARY_INTEGER; For example, assume that you want to configure one-way replication from the SITE1 database to the SITE2 database for two tablespaces, APP_DATA_TS and APP_INDX_TS. You would need to first prepare the source database (SITE1.NET) and the destination database (SITE2.NET) as follows: 1. Create a Streams administrator with proper privileges at each site. 2. Create a database link named SITE2.NET from the SITE1.NET database to the SITE2.NET database. 3. Create a DIRECTORY object named TBSP_DIRECTORY at each site by using an acceptable operating system path name. Oracle Database 11g: Implement Streams I - 91

92 Oracle Database 11g: Implement Streams I - 92
Replicating a Set of Tablespaces (continued) Now, use the following example to clone the APP_DATA_TS and APP_INDX_TS tablespaces and configure Oracle Streams to replicate data from the source database to the destination database. Run the following PL/SQL code at the SITE1.NET database while logged in as the Streams administrator: DECLARE ts_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; BEGIN ts_set(1) := 'APP_DATA_TS'; ts_set(2) := 'APP_INDX_TS'; DBMS_STREAMS_ADM.MAINTAIN_TTS( tablespace_names => ts_set, source_directory_object => 'TBSP_DIRECTORY', destination_directory_object => 'TBSP_DIRECTORY', destination_database => 'SITE2.NET', bi_directional => FALSE); END; / Usage Notes The specified set of tablespaces must be self-contained—that is, there must be no references from objects inside the set of tablespaces to objects in a tablespace that is not part of the tablespace set. For example, if a partitioned table is partially contained in the set of tablespaces, the tablespace set is not self-contained. To determine whether a set of tablespaces is self-contained, use the TRANSPORT_SET_CHECK procedure in the Oracle-supplied DBMS_TTS package. The value specified for the destination_database parameter must be the same as the name of the database link created during preconfiguration. Oracle Database 11g: Implement Streams I - 92
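Before calling MAINTAIN_TTS, the self-containment check described in the usage notes can be performed with DBMS_TTS, using the same tablespace names as the example above:

```sql
-- Check that APP_DATA_TS and APP_INDX_TS form a self-contained set
BEGIN
  DBMS_TTS.TRANSPORT_SET_CHECK(
    ts_list          => 'APP_DATA_TS,APP_INDX_TS',
    incl_constraints => TRUE);
END;
/
-- Any rows returned here indicate containment violations that must be
-- resolved before the tablespace set can be transported
SELECT * FROM transport_set_violations;
```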

93 Replicating an Entire Database
Prerequisites: “Skeleton” destination database Capture process created, but not enabled DBMS_STREAMS_ADM.MAINTAIN_GLOBAL( source_directory_object => 'SOURCE_DIR', destination_directory_object => 'DEST_DIR', source_database => 'SITE1.NET', destination_database=>'SITE2.NET', perform_actions => TRUE, bi_directional=> TRUE, instantiation => DBMS_STREAMS_ADM.INSTANTIATION_FULL); Replicating an Entire Database You can use the MAINTAIN_GLOBAL procedure to configure an Oracle Streams environment that replicates changes at the database level between two databases. This procedure can either configure the environment directly or generate a script that configures the environment. As a starting point, you must have a skeleton database (which you could create with the dbca utility) and a Capture process created, but not enabled. Note: This configures a hub-and-spoke replication topology. These procedures cannot be used for n-way configurations. To use this procedure, you specify the following information: The DIRECTORY object for the directory on the source site into which the generated Data Pump export dump file is placed and the name of the DIRECTORY object for the directory on the destination site into which the generated Data Pump export dump file is transferred The global name of the source database. If the specified global name is the same as the global name of the local database, the procedure configures a local capture process for the source database. If the specified global name is different from the global name of the local database, the procedure configures a downstream capture process at the local database. In this case, a database link from the local database to the source database with the same name as the global name of the source database must exist and must be accessible to the user who runs the procedure. Oracle Database 11g: Implement Streams I - 93

94 Oracle Database 11g: Implement Streams I - 94
Replicating an Entire Database (continued) The global name of the destination database. A database link from the source database to the destination database with the same name as the global name of the destination database must exist and must be accessible to the user who runs the procedure. Whether the MAINTAIN_GLOBAL procedure performs the necessary actions to configure the replication environment directly or places the commands in a script. If the perform_actions parameter is FALSE, you must specify the name of the script generated by this procedure and a directory object for the directory on the local computer system into which the generated script is placed. Whether or not you want to configure bi-directional replication between the source database and the destination database. By default, one-way replication from the current database to the database specified in destination_database is configured. Whether to perform instantiation and, if instantiation is performed, the type of instantiation. By default, a full instantiation is performed for this procedure. Additionally, there are many optional parameters available for this procedure call. Usage Notes You must not allow DML or DDL changes to the destination database while the MAINTAIN_GLOBAL procedure, or the script generated by the procedure, is running. A capture process never captures changes in the SYS, SYSTEM, or CTXSYS schemas. This procedure does not configure replication for these schemas. Oracle Database 11g: Implement Streams I - 94

95 Oracle Database 11g: Implement Streams I - 95
Replicating Schemas DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS( schema_names => 'HR', source_database => 'SITE3.NET', source_directory_object => 'SOURCE_DIR', destination_directory_object => 'DEST_DIR', destination_database => 'SITE2.NET', perform_actions => FALSE, script_directory_object => 'SCRIPT_DIR', script_name => 'config_HR_rep.sql', dump_file_name => 'HR_exp.dmp', capture_queue_table => 'CAPTURE_QT', capture_queue_name => 'CAPTURE_QUEUE', capture_queue_user => 'HRAPP_USER', apply_queue_table => 'strmadmin.apply_hr_qt', apply_queue_name => 'strmadmin.apply_queue', instantiation=>DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA); Replicating Schemas You can use the MAINTAIN_SCHEMAS procedure to configure an Oracle Streams environment that replicates between two databases the changes made to objects in one or more schemas. You supply the same information as for MAINTAIN_GLOBAL, except that you also provide either an array of schema names or a comma-separated list of schema names. There are two forms of this procedure: In one, the schema_names parameter is of the VARCHAR2 type. In the other, the schema_names parameter is of the DBMS_UTILITY.UNCL_ARRAY type. If the command shown in the slide is executed on the SITE1.NET database, the environment is configured in the following manner: The MAINTAIN_SCHEMAS procedure does not configure the replication environment directly. Instead, a configuration script named config_HR_rep.sql is generated in the directory pointed to by the SCRIPT_DIR DIRECTORY object. Because the source database (SITE3.NET) is not the same as the local database (SITE1.NET), the script contains commands to create a downstream capture process on the local database with SITE3.NET as its source. Oracle Database 11g: Implement Streams I - 95

96 Oracle Database 11g: Implement Streams I - 96
Replicating Schemas (continued) The queue named CAPTURE_QUEUE that uses a queue table named CAPTURE_QT is configured at the downstream capture site, if this queue and queue table do not exist. The apply process at the destination database (SITE2.NET) is configured for the queue named APPLY_QUEUE that uses a queue table named APPLY_HR_QT. The script contains commands to perform the following additional actions: Configure supplemental logging for the shared database objects at the source database only (SITE3.NET). At the downstream capture site (SITE1.NET), the commands create a capture process named CAPTURE for capturing DML changes to the HR schema and all tables in this schema. The capture process is configured to enqueue changes in the CAPTURE_QUEUE queue on SITE1.NET. Configure a propagation to propagate the captured changes from the CAPTURE_QUEUE queue in the downstream database to the APPLY_QUEUE queue in the destination database. Configure an apply process at the destination database to dequeue the changes from the APPLY_QUEUE queue and apply them to the objects in the HR schema. Instantiate the schema and all objects in the schema with a schema-level Data Pump export at the source database and a Data Pump import of the export dump file at the destination database. The instantiation SCN is set for the shared database objects during import. Start the apply process, propagation, and capture process. You must manually perform the following steps: Create a database link from the downstream database (SITE1.NET) to the source database (SITE3.NET). Configure redo log transport services from the source database to the downstream database. Oracle Database 11g: Implement Streams I - 96
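The two manual steps above might look like the following sketch; the link names, service names, destination number, and password are illustrative and must match your environment:

```sql
-- At the downstream database (SITE1.NET), as the Streams administrator:
CREATE DATABASE LINK site3.net CONNECT TO strmadmin
  IDENTIFIED BY password USING 'site3.net';

-- At the source database (SITE3.NET), configure redo log transport
-- services to ship redo to the downstream database:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=site1.net ASYNC NOREGISTER
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=site1' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;
```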

97 Oracle Database 11g: Implement Streams I - 97
Replicating Tables DECLARE tab_array DBMS_UTILITY.UNCL_ARRAY; BEGIN tab_array(0) := 'HR.EMPLOYEES'; tab_array(1) := 'HR.DEPARTMENTS'; DBMS_STREAMS_ADM.MAINTAIN_TABLES( table_names => tab_array, source_directory_object => 'SOURCE_DIR', destination_directory_object => 'DEST_DIR', source_database => 'SITE1.NET', destination_database=>'SITE2.NET', perform_actions => TRUE, capture_name => 'HR_CAPTURE', apply_name => 'HR_APPLY', bi_directional=> FALSE, instantiation => DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK); Replicating Tables You can use the MAINTAIN_TABLES procedure to easily configure an Oracle Streams environment that replicates changes to the specified tables between two databases. There are two forms of this procedure: In one, the table_names parameter is of the VARCHAR2 type. In the other, the table_names parameter is of the DBMS_UTILITY.UNCL_ARRAY type. These parameters enable you to enter the list of tables in different ways. The two forms are mutually exclusive. To use this procedure, you specify the tables that you want to replicate between sites and provide various configuration details. This example uses default values for the following arguments: script_name (NULL) script_directory_object (NULL) dump_file_name (NULL) capture_queue_table (NULL) capture_queue_name (NULL) capture_queue_user (NULL) Oracle Database 11g: Implement Streams I - 97

98 Oracle Database 11g: Implement Streams I - 98
Replicating Tables (continued) apply_queue_table (NULL) apply_queue_name (NULL) apply_queue_user (NULL) propagation_name (NULL) log_file (NULL) include_ddl (FALSE) Usage Note While the MAINTAIN_TABLES procedure is running, or the script generated by this procedure is running, there should be no DML or DDL changes made to the shared database objects at the destination database. Oracle Database 11g: Implement Streams I - 98

99 Troubleshooting the Configuration Procedures
Use the DBA_RECOVERABLE_SCRIPT* views to monitor the progress of the procedure. Recover or abort failed attempts with the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package: operation_mode = 'FORWARD' operation_mode = 'ROLLBACK' operation_mode = 'PURGE' Troubleshooting the Configuration Procedures When one of the simplified configuration procedures configures the replication environment directly (the perform_actions parameter is set to TRUE), information about the configuration actions is stored in the following data dictionary views while the procedure is running: DBA_RECOVERABLE_SCRIPT: Provides details about recoverable operations DBA_RECOVERABLE_SCRIPT_PARAMS: Provides details about recoverable operation parameters DBA_RECOVERABLE_SCRIPT_BLOCKS: Provides details about recoverable script blocks and the actions being performed DBA_RECOVERABLE_SCRIPT_ERRORS: Provides details about errors that occurred during script execution When the procedure completes successfully, metadata about the configuration operation is purged from these views. However, when one of these procedures encounters an error and stops, metadata about the configuration operation remains in these views. Typically, these procedures encounter errors when one or more prerequisites for running them are not met. If the perform_actions parameter is set to FALSE when one of the simplified configuration procedures is run, and a script is used to configure the Streams replication environment, the data dictionary views are not populated. Oracle Database 11g: Implement Streams I - 99

100 Oracle Database 11g: Implement Streams I - 100
Troubleshooting the Configuration Procedures (continued) If one of the MAINTAIN_* procedures encounters an error, you can use the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to roll the operation forward, roll the operation back, or purge the metadata about the operation: FORWARD: This option attempts to complete the configuration operation from the point at which it failed. Before specifying this option, correct the conditions that caused the errors reported in the DBA_RECOVERABLE_SCRIPT_ERRORS view. ROLLBACK: This option rolls back all actions performed by the configuration procedure. If the rollback is successful, this option also purges the metadata about the operation in the data dictionary views described previously. PURGE: This option purges the metadata about the operation in the data dictionary views described previously without rolling the operation back. Oracle Database 11g: Implement Streams I - 100
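For example, after correcting the conditions reported in DBA_RECOVERABLE_SCRIPT_ERRORS, a failed run can be resumed from the point of failure; the SCRIPT_ID value shown here is illustrative and comes from the DBA_RECOVERABLE_SCRIPT view:

```sql
BEGIN
  DBMS_STREAMS_ADM.RECOVER_OPERATION(
    script_id      => '0D8926B7C34BB161E0409C0AC018420A',
    operation_mode => 'FORWARD');
END;
/
```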

101 Viewing the Configuration Progress
SQL> SELECT * FROM DBA_RECOVERABLE_SCRIPT;

SCRIPT_ID              : 0D8926B7C34BB161E0409C0AC018420A
CREATION_TIME          : 24-FEB-06
INVOKING_PACKAGE_OWNER : SYS
INVOKING_PACKAGE       : DBMS_STREAMS_ADM
INVOKING_PROCEDURE     : MAINTAIN_SIMPLE_TTS
INVOKING_USER          : STRMADMIN
STATUS                 : EXECUTING
TOTAL_BLOCKS           :
DONE_BLOCK_NUM         :
SCRIPT_COMMENT         :

Viewing the Configuration Progress Each recoverable operation has a unique identifier. With the SCRIPT_ID identifier, you can determine the current status of the recoverable operation, how many blocks have been created, and how many blocks have completed. By querying the DBA_RECOVERABLE_SCRIPT_BLOCKS view, you can view the contents of each block created for the recoverable operation. The contents of the FORWARD_BLOCK column show the block that is currently being executed and the contents of the UNDO_BLOCK column show what is performed for a ROLLBACK of the recoverable script. The DBA_RECOVERABLE_SCRIPT_BLOCKS view also shows the status of each individual block. An example of this information is shown on the next page. Oracle Database 11g: Implement Streams I - 101

102 Oracle Database 11g: Implement Streams I - 102
Viewing the Configuration Progress (continued)

SQL> select script_id, block_num, status, block_comment
  2  from dba_recoverable_script_blocks
  3  order by block_num;

SCRIPT_ID                        STATUS       BLOCK_COMMENT
-------------------------------- ------------ ------------------------------------------------
0D8926B7C34BB161E0409C0AC018420A EXECUTED     Add supplemental log group for table "HR"."DEPT"
0D8926B7C34BB161E0409C0AC018420A EXECUTED     Add supplemental log group for table "HR"."EMP"
0D8926B7C34BB161E0409C0AC018420A EXECUTED     Set up queue "STRMADMIN"."EURO$APPQ"
0D8926B7C34BB161E0409C0AC018420A EXECUTED     APPLY changes for table "HR"."DEPT"
0D8926B7C34BB161E0409C0AC018420A EXECUTED     APPLY changes for table "HR"."EMP"
0D8926B7C34BB161E0409C0AC018420A EXECUTED     Get tag value to be used for Apply
0D8926B7C34BB161E0409C0AC018420A EXECUTING    Set up queue "STRMADMIN"."AMER$CAPQ"
0D8926B7C34BB161E0409C0AC018420A NOT EXECUTED PROPAGATE changes for table "HR"."DEPT"
0D8926B7C34BB161E0409C0AC018420A NOT EXECUTED Do not PROPAGATE back changes for table HR.DEPT
0D8926B7C34BB161E0409C0AC018420A NOT EXECUTED PROPAGATE changes for table "HR"."EMP"

Oracle Database 11g: Implement Streams I - 102

103 Removing All Streams Components
EXECUTE DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION(); Removing All Streams Components There are two ways to remove the entire Streams configuration from a database: Drop the STRMADMIN user with the CASCADE option, if all Streams components were created as this user. Use the REMOVE_STREAMS_CONFIGURATION procedure in the DBMS_STREAMS_ADM package. The REMOVE_STREAMS_CONFIGURATION procedure removes an Oracle Streams configuration at the local database. Specifically, this procedure performs the following actions at the local database: Capture: Stops, and then drops, all capture processes. If a table, schema, or database has been prepared for instantiation, this procedure aborts the preparation for instantiation for the table, schema, or database by using the appropriate ABORT_<level>_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package.

Removing All Streams Components (continued) Propagation: Disables all propagation jobs used by propagations, and drops the propagations that were created by using either the DBMS_STREAMS_ADM package or the DBMS_PROPAGATION_ADM package. It does not drop propagations that were created by using the DBMS_AQADM package. Apply: Stops, and then drops, all apply processes. If there are apply errors in the error queue for an apply process, this procedure deletes these apply errors before it drops the apply process. Removes specifications for any handlers used by an apply process, but does not delete the PL/SQL procedures used by these handlers. Removes update conflict handlers. Removes the instantiation SCNs and ignore SCNs for all instantiated objects. Removes any specifications for substitute key columns for apply tables. Rules: Drops rules that were created by using the DBMS_STREAMS_ADM package. Usage Notes You cannot undo this procedure after you execute it. You cannot use this procedure to remove only a part of a Streams environment; run it only if you are sure that you want to remove the entire Oracle Streams configuration at a database. This procedure does not remove the queue or the queue table. To do this, either explicitly drop them, or issue the SQL command DROP USER .. CASCADE for the Streams administrator. This procedure commits multiple times. If the procedure fails to complete, you must run it again. Your Oracle Streams environment will not function correctly until the removal process has completed. You can run this procedure as many times as needed to complete the removal process.
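Because REMOVE_STREAMS_CONFIGURATION leaves the queue and queue table behind, the leftover objects must be dropped separately. A minimal sketch of that cleanup using the standard DBMS_AQADM procedures; the queue and queue-table names below are examples, not values from this course setup:

```sql
-- Hypothetical cleanup after REMOVE_STREAMS_CONFIGURATION.
BEGIN
  DBMS_AQADM.STOP_QUEUE(queue_name => 'strmadmin.streams_queue');
  DBMS_AQADM.DROP_QUEUE(queue_name => 'strmadmin.streams_queue');
  DBMS_AQADM.DROP_QUEUE_TABLE(
    queue_table => 'strmadmin.streams_queue_table',
    force       => TRUE);
END;
/
-- Or simply drop the Streams administrator and everything it owns:
-- DROP USER strmadmin CASCADE;
```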

105 Removing an Oracle Streams Queue
EXECUTE DBMS_STREAMS_ADM.REMOVE_QUEUE( - queue_name => 'HR.hr_streams_queue', - drop_unused_queue_table => TRUE, - cascade => TRUE); Removing an Oracle Streams Queue The REMOVE_QUEUE procedure removes the Streams queue specified by queue_name from the database. If drop_unused_queue_table is set to TRUE, the queue table used by the queue is dropped when the last queue on the table is dropped; otherwise, the queue table is not dropped. This procedure should not be executed before removing the Streams processes that use the queue, unless the cascade parameter is set to TRUE. The REMOVE_QUEUE procedure performs the following actions: Waits until all current enqueue and dequeue transactions commit Stops the queue, which means that no further enqueues into the queue or dequeues from the queue are permitted Drops the queue Drops the queue table if the drop_unused_queue_table parameter is set to TRUE and the queue table being dropped is empty and not being used by any other queue Drops all Streams clients using the queue if the cascade parameter is set to TRUE
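To confirm which queues remain after a removal, the AQ dictionary views can be queried. A minimal sketch (DBA_QUEUES is the standard AQ dictionary view; the owner filter assumes the conventional STRMADMIN administrator):

```sql
-- List remaining queues and their queue tables for the Streams administrator.
SELECT owner, name, queue_table, queue_type
FROM   dba_queues
WHERE  owner = 'STRMADMIN';
```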

Summary In this lesson, you should have learned how to: Provide an overview of the Streams configuration procedures Configure replication of a tablespace between two databases Configure replication of a table, schema, or entire database with a single procedural call Drop a queue table Remove the Oracle Streams configuration from a database Oracle Database 11g: Implement Streams I - 106

107 Practice 3 Overview: Configuring Replication Between Two Databases
This practice covers the following topics: Using the MAINTAIN_SCHEMAS procedure to implement bi-directional replication of the HR schema and all objects within it Modifying a table and verifying that the change is replicated to the other database Practice 3 Overview: Configuring Replication Between Two Databases In this practice, you use the MAINTAIN_SCHEMAS procedure to configure the replication of the HR schema and its contents between the AMER and EURO databases. After the configuration is complete, you test the system by modifying a table on the AMER database and seeing the change appear on the EURO database. Oracle Database 11g: Implement Streams I - 107

108 Result of Practice 3: Bi-directional Schema Replication
[Diagram: Bi-directional schema replication between the AMER and EURO databases. Each database contains the HR schema plus a capture queue (HR_CAP_Q) and an apply queue (HR_APPLY_Q). One direction of the stream uses HR_CAP, HR_PROPAGATION, and HR_APPLY; the reverse direction uses HR_CAP_2, HR_PROPAGATION_2, and HR_APPLY_2.] Oracle Database 11g: Implement Streams I - 108

109 Customizing Streams with Rules

Objectives After completing this lesson, you should be able to: Identify the different types of system-created rules Use a negative rule set to prevent a Streams client from performing its task on an individual object Use subset rules to restrict the data that flows through the stream Customize system-created rules Query the data dictionary for information on system-created rules Manage Streams process rules Oracle Database 11g: Implement Streams I - 110

111 Using Rules in Oracle Streams
The following Streams clients use rules and rule sets: Capture process Synchronous capture Propagation job Apply process Messaging client The same rule set can be used by multiple Streams clients in the same database. Multiple rules in a rule set are combined with the OR operator. Using Rules in Oracle Streams In a replication environment, an Oracle Streams client performs its task if a database change satisfies its rule sets. In general, a change satisfies the rule sets for an Oracle Streams client if no rules in the negative rule set evaluate to TRUE for the change, and at least one rule in the positive rule set evaluates to TRUE for the change. The negative rule set is always evaluated first. You can use rule sets in Streams to do the following: Specify the changes that a capture process captures from the redo log. If a change that is found in the redo log causes any rule in the rule set that is associated with a capture process to evaluate to TRUE, the change is enqueued in the Streams queue by the capture process. Specify the messages that a propagation job propagates from one queue to another. If a message in a queue causes any rule in the rule set that is associated with a propagation to evaluate to TRUE, the message is propagated by the propagation job. Specify the messages that an apply process retrieves from a queue. If a message in a queue causes any rule in the rule set that is associated with an apply process to evaluate to TRUE, the message is retrieved and processed by the apply process. Specify that user-enqueued messages must be propagated, must be applied through a message handler, or must be available for explicit dequeue by a messaging client. Oracle Database 11g: Implement Streams I - 111

112 Using Rules in Oracle Streams
You can use rules to define the granularity at which a capture process, propagation job, or apply process replicates data. Levels of granularity: Global Schema Object Using Rules in Oracle Streams (continued) Global granularity implies using rules for Streams processes that operate at the database level. Object-level granularity includes the following: Table Subset of rows in a table Only data manipulation language (DML) operations Only data definition language (DDL) operations Subset of operations that are performed against a table Oracle Database 11g: Implement Streams I - 112

113 Generating System-Created Rules
You can use procedures in the DBMS_STREAMS_ADM package to generate system-created rules. For a specific component: ADD_TABLE_RULES ADD_SCHEMA_RULES ADD_GLOBAL_RULES ADD_SUBSET_RULES ADD_*_PROPAGATION_RULES For multiple components: MAINTAIN_SIMPLE_TTS, MAINTAIN_TTS MAINTAIN_GLOBAL MAINTAIN_SCHEMAS MAINTAIN_TABLES Generating System-Created Rules The DBMS_STREAMS_ADM package, one of a set of Streams packages, provides subprograms for adding and removing simple rules for capture, propagation, and apply at the table, schema, and database level. These rules support logical change records (LCRs), which include row LCRs and DDL LCRs. This package also contains several procedures for configuring and maintaining a Streams replication environment. For each of these procedures, you can specify whether you want to create DML rules, DDL rules, or both. DML and DDL rules can be configured independent of each other. For example, if you are creating rules for a capture process, you can create a DML rule for a table first, and then create a DDL rule for the same table in the future without modifying the DML rule that you created earlier. You can also create a DDL rule for a table first, and then add a DML rule for the same table in the future. Oracle Database 11g: Implement Streams I - 113

114 Generating System-Created Rules
The procedures in the DBMS_STREAMS_ADM package that generate system-created rules also perform the following actions: They can create a capture process, synchronous capture, propagation, or apply process if the specified process or job does not exist. They can create a rule set for the specified capture process, propagation job, or apply process if a rule set does not exist for the process or job. They can create zero or more rules and add the rules to the rule set for the process or job. Generating System-Created Rules (continued) The procedures in the DBMS_STREAMS_ADM package create the specified capture process, apply process, or propagation if it does not exist. If configuring rules for a capture process, these procedures also enable supplemental logging for the primary key, unique key, foreign key, and bitmap index columns in the tables that are prepared for instantiation. The primary key columns are unconditionally logged. The unique key, foreign key, and bitmap index columns are conditionally logged. These procedures create rules and associate them with the positive rule set that is used by a Streams component. If a rule set does not exist for a Streams component, it is created and associated with the Streams component, and the rules are added to this new rule set. If you want to perform the Streams task only for DML changes or only for DDL changes, only one rule is created. If, however, you want to perform the Streams task for both DML and DDL changes, a rule is created for each type of change. Synchronous capture can be created by the ADD_TABLE_RULES and the ADD_SUBSET_RULES procedures. Depending upon which procedure you use, you can perform other actions in addition to those mentioned here. Oracle Database 11g: Implement Streams I - 114

115 System-Created Rule: Example
EXECUTE DBMS_STREAMS_ADM.ADD_TABLE_RULES( - table_name=>'hr.employees', - streams_type=>'apply', - streams_name=>'apply_site1_lcrs', - queue_name=>'streams_queue', - source_database=>'site1.net'); This procedure generates the rule condition (below) on the "apply" database. System-created rule condition for the table-level rule: :dml.get_object_owner() = 'HR' AND :dml.get_object_name() = 'EMPLOYEES' AND :dml.is_null_tag() = 'Y' AND :dml.get_source_database_name() = 'SITE1.NET' System-Created Rule: Example In the example in the slide, the DBMS_STREAMS_ADM package is used to create a table-level rule for applying DML changes to the HR.EMPLOYEES table. The system-created rule is placed in the rule set that is associated with the apply process. In this example, not all the parameters in the ADD_TABLE_RULES procedure were specified, so some default values were used, such as: include_dml defaults to TRUE include_ddl defaults to FALSE include_tagged_lcr defaults to FALSE inclusion_rule defaults to TRUE and_condition defaults to NULL The second code box shows the resulting rule condition, which Streams generates for you on the "apply" database. If you want to exclude certain changes from being captured, propagated, or applied in a Streams environment, instead of modifying the rule to contain the NOT conditional operator, you can use the DBMS_STREAMS_ADM package and create a negative rule by specifying a value of FALSE for the inclusion_rule parameter.
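Once generated, the rule text can be inspected in the data dictionary. A minimal sketch, assuming SELECT privileges on the DBA views; the streams name matches the example above:

```sql
-- Show the system-created rules and their conditions for the example
-- apply process.
SELECT rule_owner, rule_name, streams_rule_type, rule_condition
FROM   dba_streams_rules
WHERE  streams_name = 'APPLY_SITE1_LCRS'
AND    streams_type = 'APPLY';
```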

116 System-Created Rule Components
System-created rules: Use a built-in evaluation context May have one or more associated action contexts System-Created Rule Components System-created rule sets and rules use a built-in evaluation context in the SYS schema named STREAMS$_EVALUATION_CONTEXT. The PUBLIC user is granted the EXECUTE privilege on this evaluation context. This evaluation context defines the :dml, :ddl, and :lcr variables that are used in the rule condition of system-created rules. Streams uses action contexts for multiple purposes: Internal LCR transformations in subset rules User-defined rule-based transformations Enqueue directives (only apply) Execute directives (only apply) If an action context for a rule contains both a subset transformation and a user-defined rule-based transformation, the subset transformation is performed before the user-defined rule-based transformation. If multiple rules in a rule set have action contexts, you must ensure that only one rule evaluates to TRUE at any given time. When multiple rules in a rule set evaluate to TRUE for a given message, only one rule is returned and only one action context is implemented. If multiple rules with action contexts evaluate to TRUE for the same message, it is not possible to determine which action context will be implemented. It is therefore recommended not to have multiple rules on the same object in the same rule set, especially if the rules have different action contexts.

117 Using Subset Rules with Oracle Streams
Subset rules can be created for capture, propagation, and apply with the DBMS_STREAMS_ADM package. Using Subset Rules with Oracle Streams In an Oracle database, you can create subset rules for capture, propagation, or apply. Subset rules enable you to specify that only those rows in the named table that match a specified DML condition (similar to a WHERE clause) are captured, propagated, or applied. Subset rules should be used: With a capture process when all destination databases of the capture process need only row changes for a data subset. Using subset rules on capture can improve data security by allowing you to restrict certain rows from being placed in the data stream and propagated to databases other than the source database. With a propagation or an apply process when some destinations in an environment need only a subset of the captured DML changes. Using subset rules on propagation reduces the network resources that are required by Oracle Streams by reducing the amount of data that is sent to a destination database. You can add subset rules only to the positive rule set for a capture process, propagation, and apply process. You cannot add subset rules to a negative rule set.

Creating Subset Rules BEGIN DBMS_STREAMS_ADM.ADD_SUBSET_RULES( table_name => 'hr.regions', dml_condition => 'region_id=2', streams_type => 'capture', streams_name => 'hr_capture', queue_name => 'hr_queue'); END; / Creating Subset Rules A subset rule is a special type of table rule for DML changes. You can create subset rules for capture and apply by using the ADD_SUBSET_RULES procedure. Use the ADD_SUBSET_PROPAGATION_RULES procedure to create subset rules for propagation. These procedures enable you to use a condition that is similar to the WHERE clause in a SELECT statement. The example in the slide creates a subset rule that enables Streams to capture changes for rows in the HR.REGIONS table, but only if they have a region ID of 2. The ADD_SUBSET_RULES procedure always creates three rules for three different types of DML operations on a table: INSERT, UPDATE, and DELETE. The ADD_SUBSET_RULES procedure does not create rules for DDL changes to the table. However, you can use the ADD_TABLE_RULES procedure to create a DDL rule for the table. When you use subset rules, an update operation may be converted into an insert or delete operation when it is captured, propagated, or applied. This automatic conversion is called row migration and is performed by an internal transformation that is specified automatically in the action context of a system-created subset rule. For the action context to function properly, the subset rule must be part of a positive rule set. Oracle Database 11g: Implement Streams I - 118

119 Oracle Database 11g: Implement Streams I - 119
Creating Subset Rules (continued) For an UPDATE statement to be successfully converted into an INSERT statement, unconditional supplemental logging must be specified at the source database for all columns in the subset condition and all columns in the tables at the destination database that will apply these changes. Oracle Database 11g: Implement Streams I - 119
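For the HR.REGIONS subset example, the row-migration requirement above translates into unconditional supplemental logging at the source. A minimal sketch of the simplest form, logging all columns of the table; a narrower log group covering only the required columns would also satisfy the requirement:

```sql
-- Unconditionally log all columns of HR.REGIONS at the source so that a
-- row-migrated UPDATE can be rebuilt as a complete INSERT at the destination.
ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```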

Row Subsetting online 04-JUN-04 2389 direct 05-MAY-04 2456 7 14-DEC-04 2451 9 4 ORDER_STATUS 18-NOV-04 2390 ORDER_MODE ORDER_DATE ORDER_ID 04-OCT-04 2453 5 31-OCT-04 2457 Row Subsetting Subset rules can be used to maintain a subset table in a different schema or in a different database. For example, you could use subset rules so that a Streams client operates only on specific orders, such as those that have not yet been billed (order_status < 5). The destination table would contain only those rows with an order_status of 4 or lower. If a row in the source table is updated and the order status is changed to a higher value, the UPDATE operation is transformed into a DELETE operation and the row is removed from the destination table because the row no longer satisfies the subset condition. dml_condition => 'order_status < 5' Oracle Database 11g: Implement Streams I - 120

121 Customizing System-Created Rules
BEGIN DBMS_STREAMS_ADM.ADD_SCHEMA_RULES( schema_name => 'hr', streams_type => 'apply', streams_name => 'apply_site1_lcrs', queue_name => 'strmadmin.streams_queue', include_dml => FALSE, include_ddl => TRUE, and_condition=> ':lcr.get_object_type() != ''INDEX''' ); END; / (:ddl.get_object_owner() = 'HR') and (:ddl.get_object_type() != 'INDEX') Customizing System-Created Rules Some of the procedures that create rules in the DBMS_STREAMS_ADM package allow you to specify an and_condition parameter, thereby enabling you to add conditions to system-created rules. The condition specified by the and_condition parameter is appended to the system-created rule condition by using an AND clause in the following way: (system_condition) AND (and_condition) The variable in the specified condition must be :lcr, which represents an LCR member subprogram. However, you cannot use the and_condition parameter when creating subset rules. In the example shown in the slide, the ADD_SCHEMA_RULES procedure creates a DDL rule that evaluates to TRUE only if the DDL command passed to the apply process is for an object in the HR schema, and the object is not an index. You can use LCR member subprograms that are specific to either row LCRs or DDL LCRs. When the rules are created, the :lcr variable is automatically converted to :dml or :ddl, depending on the type of rule created. If the LCR member subprogram does not apply to the type of rule that was created, the rule does not evaluate correctly. For example, do not use a row LCR–specific member subprogram in and_condition when generating DDL rules. Oracle Database 11g: Implement Streams I - 121

Negative Rule Sets You can use a combination of positive and negative rule sets to configure the activity of a Streams client. In Enterprise Manager (EM), click Data Movement > Manage (in the Streams section), and then click the appropriate tab, such as Capture, to associate a rule set with a Streams process. From the command line, if you want a Streams client to perform a task, you create a positive rule for the task by specifying a value of TRUE for the inclusion_rule parameter when generating the system-created rule. If you do not want the Streams client to perform a task, you create a negative rule for the task by specifying a value of FALSE for the inclusion_rule parameter when generating the system-created rule. For example, if you want DML changes to be applied to all tables in the HR schema except the JOB_HISTORY table, you must create a positive schema-level apply rule for the HR schema and a negative table-level apply rule for the HR.JOB_HISTORY table. Capture processes, propagations, apply processes, and messaging clients are clients of the rules engine. These Streams clients may have no rule set, only a positive rule set, only a negative rule set, or both a positive and a negative rule set. Also, a single rule set may be a positive rule set for one Streams client and a negative rule set for another Streams client. A Streams client performs an action if a message satisfies the rule sets for the client. A message satisfies the rule sets if no rules in the negative rule set evaluate to TRUE for the message, and if at least one rule in the positive rule set evaluates to TRUE for the message.

123 Negative Rule Sets: Example
Negative Rule Sets: Example Consider a Streams configuration in which a capture process exists on a database. There are two rule sets: ruleset_pos and ruleset_neg. The ruleset_pos positive rule set contains a single rule for the HR schema. This means that changes for all tables in the HR schema are eligible for capture. The ruleset_neg negative rule set contains a single rule for the HR.JOB_HISTORY table. The following cases demonstrate how changes in the database are captured: If neither rule set is associated with the capture process, the capture process attempts to capture all changes in the redo logs. If only the ruleset_pos positive rule set is associated with the capture process, only the changes for the HR schema are captured (if possible). If only the ruleset_neg negative rule set is associated with the capture process, all changes that occur in the database—except those for the HR.JOB_HISTORY table—are captured (if possible). If both the ruleset_pos positive rule set and the ruleset_neg negative rule set are associated with the capture process, changes for the HR schema—except those for the HR.JOB_HISTORY table—are captured (if possible).
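The ruleset_pos/ruleset_neg scenario above can be produced with two DBMS_STREAMS_ADM calls. A minimal sketch; the capture-process and queue names are illustrative, and the actual rule-set names are system-generated rather than the labels used in the example:

```sql
BEGIN
  -- Positive rule: capture all changes to the HR schema.
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'hr',
    streams_type   => 'capture',
    streams_name   => 'capture1',
    queue_name     => 'strmadmin.streams_queue',
    inclusion_rule => TRUE);
  -- Negative rule: discard changes to HR.JOB_HISTORY.
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.job_history',
    streams_type   => 'capture',
    streams_name   => 'capture1',
    queue_name     => 'strmadmin.streams_queue',
    inclusion_rule => FALSE);
END;
/
```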

124 Rule Evaluation with Rule Sets
Rule Evaluation with Rule Sets The following cases illustrate the method of rule evaluation when using rule sets: A) A Streams client with no rule sets performs its task for all messages. B) A Streams client with a positive rule set—but no negative rule set—performs its task for a message if any rule in the positive rule set evaluates to TRUE for the message. If all rules in the positive rule set evaluate to FALSE for the message, the Streams client discards the message. Note: If a Streams client has an empty positive rule set, everything is excluded. C) A Streams client with a negative rule set—but no positive rule set—discards a message if any rule in the negative rule set evaluates to TRUE for the message. If all rules in the negative rule set evaluate to FALSE for the message, the Streams client performs its task for the message. D) If a Streams client has both a positive and a negative rule set, the negative rule set is evaluated first. If any rule in the negative rule set evaluates to TRUE for the message, the message is discarded and is never evaluated against the positive rule set.

Rule Evaluation with Rule Sets (continued) If all the rules in the negative rule set evaluate to FALSE for the message, the message is evaluated against the positive rule set. In this case, the behavior is the same as when the Streams client has only a positive rule set: the Streams client performs its task for a message only if a rule in the positive rule set evaluates to TRUE for the message. If all the rules in the negative rule set evaluate to FALSE for the message and all rules in the positive rule set evaluate to FALSE for the message, the Streams client discards the message. Oracle Database 11g: Implement Streams I - 125

126 Creating Negative Rules
BEGIN DBMS_STREAMS_ADM.ADD_TABLE_RULES( table_name => 'hr.locations', streams_type => 'capture', streams_name => 'capture1', queue_name => 'strmadmin.streams_queue', inclusion_rule => FALSE ); END; / Creating Negative Rules If you use the DBMS_STREAMS_ADM package to create the rules for the Streams client, you only need to specify a value of FALSE for the inclusion_rule argument to add the rule to the negative rule set used by the Streams client. If a negative rule set does not exist, it is automatically created. If the value of the inclusion_rule argument is TRUE, the rule is added to the positive rule set for the Streams client. The default value for inclusion_rule is TRUE. When you manually create a rule set by using procedures in the DBMS_RULE_ADM package, the rule set you create is neither positive nor negative. It is just a collection of rules. This rule set can then be assigned to a Streams client as either a positive rule set or a negative rule set. The rules are the same, but the method of interpreting these rules depends on whether you define the rule set as positive or negative for the Streams client. Oracle Database 11g: Implement Streams I - 126

127 Monitoring System-Created Rules
Streams client rules: [ALL | DBA]_STREAMS_RULES [ALL | DBA]_STREAMS_GLOBAL_RULES [ALL | DBA]_STREAMS_SCHEMA_RULES [ALL | DBA]_STREAMS_TABLE_RULES Streams client rule sets: [ALL | DBA]_CAPTURE [ALL | DBA]_PROPAGATION [ALL | DBA]_APPLY Message rules and rule sets: [ALL | DBA]_STREAMS_MESSAGE_RULES [ALL | DBA]_STREAMS_MESSAGE_CONSUMERS Monitoring System-Created Rules The Streams rules are rules that are created using the DBMS_STREAMS_ADM package or the Streams tool in the Oracle Enterprise Manager Console. The rule sets for a Streams client may also contain rules that are created using the DBMS_RULE_ADM package, and these rules also determine the behavior of the Streams client. The DBA_STREAMS_RULES and ALL_STREAMS_RULES data dictionary views display all rules in the rule sets for Streams clients, including Streams rules and rules created using the DBMS_RULE_ADM package. You can use the following data dictionary views to view rule information for rules that are created with the DBMS_STREAMS_ADM package: DBA_STREAMS_GLOBAL_RULES DBA_STREAMS_SCHEMA_RULES DBA_STREAMS_TABLE_RULES Oracle Database 11g: Implement Streams I - 127
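As a starting point, the table-level view can show which objects a given Streams client operates on. A minimal sketch against the views listed above; the streams name is illustrative:

```sql
-- Table-level Streams rules for a capture process, including the subset
-- operation (INSERT/UPDATE/DELETE) when the rule is a subset rule.
SELECT table_owner, table_name, rule_type, rule_name, subsetting_operation
FROM   dba_streams_table_rules
WHERE  streams_name = 'CAPTURE1'
AND    streams_type = 'CAPTURE';
```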

128 Monitoring Negative Rule Sets
SQL> SELECT PROPAGATION_NAME,
  2>        NEGATIVE_RULE_SET_OWNER,
  3>        NEGATIVE_RULE_SET_NAME
  4> FROM DBA_PROPAGATION;

PROPAGATION_NAME NEGATIVE_RULE_SET_OWNER NEGATIVE_RULE_SET_NAME
---------------- ----------------------- ----------------------
PROP_TO_SITE2    STRMADMIN               RULESET$_31

Monitoring Negative Rule Sets To determine whether negative rule sets are being used in your Streams environment, query the NEGATIVE_RULE_SET_OWNER and NEGATIVE_RULE_SET_NAME columns of: DBA_APPLY DBA_CAPTURE DBA_PROPAGATION DBA_STREAMS_MESSAGE_CONSUMERS To display the rules in a negative rule set, you can use a query similar to the following:

SELECT s.TABLE_NAME, s.SOURCE_DATABASE, s.RULE_TYPE, s.RULE_NAME,
       s.RULE_OWNER, s.INCLUDE_TAGGED_LCR
FROM   DBA_STREAMS_TABLE_RULES s, DBA_APPLY a, DBA_RULE_SET_RULES r
WHERE  s.STREAMS_NAME = 'APPLY_SITE1_LCRS'
AND    s.STREAMS_TYPE = 'APPLY'
AND    a.NEGATIVE_RULE_SET_OWNER = r.RULE_SET_OWNER
AND    a.NEGATIVE_RULE_SET_NAME = r.RULE_SET_NAME
AND    s.RULE_OWNER = r.RULE_OWNER
AND    s.RULE_NAME = r.RULE_NAME;

If this query returns any rows, the rules returned can cause the apply process to discard LCRs containing the changes to the specified table.

Monitoring Negative Rule Sets (continued) With formatting, the output for this query looks similar to the following:

                       Apply Rule                            Tagged
Table Name  Source     Type        Rule Name    Rule Owner   LCRs?
----------  ---------  ----------  -----------  -----------  ------
EMPLOYEES   SITE1.NET  DDL         EMPLOYEES48  STRMADMIN    YES
EMPLOYEES   SITE1.NET  DML         EMPLOYEES49  STRMADMIN    YES

130 Managing Streams Process Rule Sets
Specify or remove a rule set for a process: DBMS_CAPTURE_ADM.ALTER_CAPTURE DBMS_APPLY_ADM.ALTER_APPLY DBMS_PROPAGATION_ADM.ALTER_PROPAGATION Add rules to a rule set for a process: DBMS_STREAMS_ADM.ADD_*_RULES DBMS_STREAMS_ADM.ADD_*_PROPAGATION_RULES DBMS_RULE_ADM.ADD_RULE Remove a rule from a rule set that is used by a Streams process: DBMS_STREAMS_ADM.REMOVE_RULE DBMS_RULE_ADM.REMOVE_RULE Managing Streams Process Rule Sets You can specify the existing rule set that you want to associate with a Streams process as a positive rule set by using the rule_set_name parameter in the ALTER_CAPTURE, ALTER_APPLY, or ALTER_PROPAGATION procedures. For example, the following procedure sets the positive rule set for a capture process named CAPTURE1 to the ORDER_APP_RS rule set. BEGIN DBMS_CAPTURE_ADM.ALTER_CAPTURE( capture_name => 'capture1', rule_set_name => 'ix.order_app_rs'); END; / To associate an existing rule set with a Streams component as a negative rule set, use the negative_rule_set_name parameter when altering the Streams component. To remove a rule set used by an existing Streams component, set the remove_rule_set or remove_negative_rule_set parameter to TRUE in the ALTER_CAPTURE, ALTER_APPLY, or ALTER_PROPAGATION procedures. Oracle Database 11g: Implement Streams I - 130

Managing Streams Process Rule Sets (continued) For example, the following procedure removes the negative rule set from a propagation named PROP_TO_SITE2. BEGIN DBMS_PROPAGATION_ADM.ALTER_PROPAGATION( propagation_name => 'prop_to_site2', remove_negative_rule_set => true); END; / Note: If you remove a rule set for a Streams process, the process performs its task for all messages, excluding those for which negative rules are defined. However, a capture process does not capture changes to database objects in the SYS, SYSTEM, or CTXSYS schemas. To add rules to the rule sets used by a Streams process, you can run one of the following procedures with the inclusion_rule parameter set to either TRUE (positive rules) or FALSE (negative rules): DBMS_STREAMS_ADM.ADD_TABLE_[PROPAGATION_]RULES DBMS_STREAMS_ADM.ADD_SCHEMA_[PROPAGATION_]RULES DBMS_STREAMS_ADM.ADD_GLOBAL_[PROPAGATION_]RULES DBMS_STREAMS_ADM.ADD_SUBSET_[PROPAGATION_]RULES To remove a system-created rule from the rule set for an existing process, use the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package. For example, the following procedure removes a rule named ORDERS5 from the rule set of a capture process named CAPTURE1, without dropping the rule from the database. BEGIN DBMS_STREAMS_ADM.REMOVE_RULE( rule_name => 'STRMADMIN.orders5', streams_type => 'capture', streams_name => 'capture1', drop_unused_rule => false); END; / If a rule was created with the DBMS_RULE_ADM package, you must use the REMOVE_RULE procedure in the DBMS_RULE_ADM package to remove the rule from a rule set. BEGIN DBMS_RULE_ADM.REMOVE_RULE( rule_name => 'HR.hr_emp_dml', rule_set_name => 'HR.HR_4_RS'); END; /

132 Adding New Rules: Example
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_PROPAGATION_RULES(
    table_name             => 'HR.EMPLOYEES',
    dml_condition          => 'job_id LIKE ''SA%''',
    streams_name           => 'prop_to_site3',
    source_queue_name      => 'strmadmin.hr_queue',
    destination_queue_name => 'ix.streams_queue@site3.net',
    source_database        => NULL);
END;
/
Adding New Rules: Example
If the PROP_TO_SITE3 propagation exists, running the procedure shown in the slide performs the following actions:
Creates three rules and adds them to the positive rule set used by the propagation:
A rule for update operations that satisfy the DML condition
A rule for insert operations and converted update operations
A rule for delete operations and converted update operations
Creates action contexts for the rules with the name-value pairs ('STREAMS$_ROW_SUBSET', 'INSERT') and ('STREAMS$_ROW_SUBSET', 'DELETE'). The action contexts transform an update operation into an insert or delete operation in a conversion process known as row migration.
Specifies that LCRs for the HR.EMPLOYEES table that contain a value starting with SA in the job_id column should be propagated from the HR_QUEUE in the local STRMADMIN schema to the STREAMS_QUEUE in the IX schema in the SITE3.NET database
Oracle Database 11g: Implement Streams I - 132

133 Managing Rules and Rule Sets
By using the DBMS_RULE_ADM package, you can:
Alter a rule:
Change a rule condition
Change or remove the rule evaluation context
Change or remove the rule’s action context
Change or remove the comment for a rule
Remove a rule from a rule set
Drop a rule from the database
Drop a rule set from the database
Managing Rules and Rule Sets
If you run the REMOVE_RULE procedure, the rule still exists in the database. If the rule was used in a rule set other than the one you remove it from, it still remains in those rule sets.
The DROP_RULE procedure has a parameter named force that defaults to FALSE. This means that the rule cannot be dropped if it is in one or more rule sets. If force is set to TRUE, the rule is dropped from the database and automatically removed from any rule sets that contain it.
The DROP_RULE_SET procedure has a parameter called delete_rules that defaults to FALSE. If the rule set contains any rules, these rules are not dropped. If the delete_rules parameter is set to TRUE, any rules in the rule set that are not in another rule set are dropped from the database. If some of the rules in the rule set are in one or more other rule sets, these rules are not dropped.
Oracle Database 11g: Implement Streams I - 133
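A minimal sketch of the force and delete_rules parameters described in the notes; the rule and rule set names here are hypothetical:

```sql
BEGIN
  -- Drop a rule even if it is still contained in one or more rule sets
  DBMS_RULE_ADM.DROP_RULE(
    rule_name => 'strmadmin.hr_emp_dml',
    force     => TRUE);

  -- Drop a rule set and any of its rules that are not used in another rule set
  DBMS_RULE_ADM.DROP_RULE_SET(
    rule_set_name => 'strmadmin.hr_rs',
    delete_rules  => TRUE);
END;
/
```

With the defaults (force => FALSE, delete_rules => FALSE), DROP_RULE fails for a rule that is still in a rule set, and DROP_RULE_SET leaves the contained rules in the database.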

134 Oracle Database 11g: Implement Streams I - 134
Managing Rules and Rule Sets (continued) If you cannot create a rule with your preferred rule condition by using the DBMS_STREAMS_ADM package, you can create a new rule (with a condition that is based on a system-created rule) by following these general steps: 1. Copy the rule condition of the system-created rule. You can view the rule condition of a system-created rule by querying the DBA_STREAMS_TABLE_RULES, DBA_STREAMS_SCHEMA_RULES, or DBA_STREAMS_GLOBAL_RULES data dictionary views. 2. Modify the rule condition and use it to create a new rule. 3. Add the new rule to the rule set for the Streams process or propagation. 4. Remove the original rule if it is no longer needed. Use the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package to remove the system-created rule. Oracle Database 11g: Implement Streams I - 134
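The four general steps above can be sketched as follows; the system-created rule EMPLOYEES12, the rule set RULESET$_15, and the capture process CAPTURE1 are hypothetical names for illustration:

```sql
-- 1. Copy the condition of the system-created rule
SELECT rule_condition
FROM   dba_streams_table_rules
WHERE  rule_name = 'EMPLOYEES12';

-- 2. Create a new rule with a modified condition
BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strmadmin.employees_custom',
    condition => ':dml.get_object_owner() = ''HR'' AND ' ||
                 ':dml.get_object_name() = ''EMPLOYEES'' AND ' ||
                 ':dml.get_command_type() = ''UPDATE''');

  -- 3. Add the new rule to the rule set used by the capture process
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'strmadmin.employees_custom',
    rule_set_name => 'strmadmin.ruleset$_15');

  -- 4. Remove the original system-created rule
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name    => 'strmadmin.employees12',
    streams_type => 'capture',
    streams_name => 'capture1');
END;
/
```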

135 Oracle Database 11g: Implement Streams I - 135
Altering a Rule
BEGIN
  DBMS_RULE_ADM.ALTER_RULE(
    rule_name          => 'strmadmin.hr_emp_dml',
    condition          => ':dml.get_object_owner()=''HR'' AND ' ||
                          ':dml.get_object_name()=''EMPLOYEES'' AND ' ||
                          ':dml.get_compatible() <= DBMS_STREAMS.COMPATIBLE_9_2',
    evaluation_context => NULL);
END;
/
Altering a Rule
Assume that you want to change the condition of the rule that was created previously. The condition in the existing HR_EMP_DML rule evaluates to TRUE for any DML change to the EMPLOYEES table in the HR schema. If you want to exclude changes that are not compatible with an Oracle9i, Release 2 Streams database, you can alter the rule to exclude messages with a higher compatibility value. The procedure in the slide alters the rule in this way.
Note: Changing the condition of a rule affects all rule sets that contain the rule. Altering system-created rules is not recommended. If you want to create system-created rules to exclude messages based on the compatibility level, you can use the and_condition parameter of ADD_*_RULES. For example:
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
  table_name         => 'hr.departments',
  streams_type       => 'apply',
  streams_name       => 'apply_site1_lcrs',
  queue_name         => 'ix.streams_queue',
  include_tagged_lcr => true,
  and_condition      => ':lcr.get_compatible() <= DBMS_STREAMS.COMPATIBLE_9_2');
You could also create a negative rule that checks at a schema or global level and evaluates to TRUE for messages with a higher compatibility level, instead of updating the rule condition of each rule.
Oracle Database 11g: Implement Streams I - 135

136 Oracle Database 11g: Implement Streams I - 136
Dropping Rule Sets
Use the drop_unused_rule_sets parameter of:
DBMS_CAPTURE_ADM.DROP_CAPTURE
DBMS_APPLY_ADM.DROP_APPLY
DBMS_PROPAGATION_ADM.DROP_PROPAGATION
Drop the rule sets along with the Streams client:
EXECUTE DBMS_CAPTURE_ADM.DROP_CAPTURE( -
  capture_name => 'hr_capture', -
  drop_unused_rule_sets => TRUE);
Dropping Rule Sets
If drop_unused_rule_sets is set to TRUE and the rule sets used by the Streams client that is being dropped are used by another Streams client in the same database, the rule set is not dropped. If the rule sets are not used by any other Streams client, they are removed from the database. Rules in those rule sets that are not used in other rule sets are also dropped. The default value of drop_unused_rule_sets is FALSE for backward compatibility and safety.
If you need to re-create a capture or apply process, but want to use the same rules for the new capture or apply process, set drop_unused_rule_sets to FALSE. Then, create the new capture process by calling DBMS_CAPTURE_ADM.CREATE_CAPTURE or the new apply process by calling DBMS_APPLY_ADM.CREATE_APPLY and specify the name of the rule set that was used before the capture or apply process was dropped.
Oracle Database 11g: Implement Streams I - 136

137 Oracle Database 11g: Implement Streams I - 137
Summary In this lesson, you should have learned how to: Identify the different types of system-created rules Use a negative rule set to prevent a Streams client from performing its task on an individual object Use subset rules to restrict the data that flows through the stream Customize system-created rules Query the data dictionary for information on system-created rules Manage Streams process rules Oracle Database 11g: Implement Streams I - 137

138 Practice 4-1 Overview: Adding and Testing Rules
This practice covers using the ADD_TABLE_RULES procedure to create a negative rule set, which excludes the HR.JOB_HISTORY table from replication.
[Figure: HR_CAP and HR_CAP_Q in the HR schema of the AMER database; HR_PROPAGATION delivers changes to HR_APPLY_Q and HR_APPLY in the EURO database; the HR.JOB_HISTORY table is excluded at both sites]
Practice 4-1 Overview: Adding and Testing Rules
Before you can work on this practice, you must complete the previous one.
Oracle Database 11g: Implement Streams I - 138

139 Practice 4-2 Overview: Removing Streams Configurations in EM
This practice covers removing all Streams configurations from both databases.
[Figure: the schemas of the AMER and EURO databases after the Streams configurations are removed]
Practice 4-2 Overview: Removing Streams Configurations in EM
In the first few lessons, you used APIs and Enterprise Manager to implicitly create the Streams processes for you. At the end of this practice, you delete your Streams environment to avoid dependencies for the next one. In the following practice sessions, you create Streams processes step by step, and learn more about using and customizing them.
Oracle Database 11g: Implement Streams I - 139

140 Capture Process: Concepts

141 Oracle Database 11g: Implement Streams I - 141
Objectives After completing this lesson, you should be able to: Describe the functionality of the capture process List the types of changes that are captured Describe the capture process architecture Describe Streams tags and the ways in which they are used during capture Oracle Database 11g: Implement Streams I - 141

142 Oracle Database 11g: Implement Streams I - 142
Capture
Streams messages can be enqueued in two ways.
Implicitly:
Redo-based capture of DML and DDL changes
Synchronous capture (part of DML user transactions)
Explicitly:
Direct enqueue of user messages
Messages are enqueued in a staging area.
[Figure: rules control which messages are enqueued into the Streams pool in the SGA]
Capture
The capture process is a background process that looks for the occurrence of specific messages in the redo stream. These messages can be data manipulation language (DML) or data definition language (DDL) messages:
Rows inserted, updated, or deleted in particular tables
Changes to the table structure
Object creation statements
Oracle Streams messages are placed in a staging area. These messages can be enqueued in two ways.
Implicitly: A capture process mines DML and DDL messages from the redo logs. The capture process then uses the rules in its positive and negative rule sets to determine which messages to put in the buffer queue in memory and which messages to discard. In synchronous capture, a capture process captures DML as part of the user transaction. For details, see the lesson titled “Configuring Synchronous Capture.”
Explicitly: Applications explicitly generate messages and enqueue them in the staging queue in the database. Additionally, you can enqueue messages explicitly into the same staging queues as the Streams captured ones. Your messages may (or may not) be associated with particular data messages. User-enqueued messages can be routed through the Oracle Streams environment in the same manner as data change messages.
An exception to using queues is “Combined Capture and Apply.”
Oracle Database 11g: Implement Streams I - 142
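One way to perform the explicit enqueue described above is the DBMS_STREAMS_MESSAGING package, which wraps the payload in an ANYDATA instance; this sketch assumes a queue named STRMADMIN.STREAMS_QUEUE:

```sql
DECLARE
  msg SYS.ANYDATA;
BEGIN
  -- Wrap an arbitrary user message in an ANYDATA payload
  msg := SYS.ANYDATA.ConvertVarchar2('order 1234 shipped');
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',
    payload    => msg);
  COMMIT;
END;
/
```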

143 Combined Capture and Apply
Automatic optimization:
Direct data transfer from capture to apply
Via database link
Without queues
[Figure: a capture process sending LCRs directly to an apply process over a database link]
Combined Capture and Apply
In Oracle Database 11g, Release 1, a capture process automatically sends logical change records (LCRs) directly to an apply process when there is a single publisher and consumer defined for the queue that contains the captured changes. This optimized configuration is called combined capture and apply. When combined capture and apply (CCA) is in use, LCRs are transmitted directly from the capture process to the apply process via a database link. In this mode, the capture does not stage the LCR in a queue or use queue propagation to deliver LCRs. The GV$STREAMS_CAPTURE and GV$STREAMS_APPLY_READER views reflect connect details and statistics when CCA is in use.
Oracle Database 11g: Implement Streams I - 143

144 Oracle Database 11g: Implement Streams I - 144
Redo-Based Capture
[Figure: user changes to database objects are recorded in the redo log; the capture process reads the log and enqueues LCRs and user messages into a queue]
Log-based change capture provides low-overhead, low-latency change capture:
Changes to the database are written to the redo log buffers.
Oracle Streams extracts changes from the log buffers.
Changes are formatted as an LCR, a representation of the change.
LCRs are placed in the Streams staging area for further processing.
Redo-Based Capture
Capturing changes directly from the redo log buffers minimizes the overhead on the source system. Log-based capture leverages the fact that the changes made to tables are logged to guarantee recoverability in the event of a crash or media failure. The database provides supplemental logging to log additional information into the redo stream (such as primary key columns) to facilitate the delivery of the captured changes.
The capture process:
Retrieves the change data that is extracted from the redo log buffer
Formats it into an LCR
Places it in a staging area for further processing
The capture process can intelligently filter LCRs based on defined rules. Thus, only changes to desired objects are captured. Capture supports mining the online redo logs as well as mining archived log files when needed. In the case of mining online redo logs, the redo stream is mined for change data shortly after the redo is generated, thereby reducing the latency of the capture.
Oracle Database 11g: Implement Streams I - 144

145 Capture Process: Components
The capture process uses the following parallel execution servers to capture changes concurrently:
One reader server to read changes from the redo log and divide the redo log into regions
A preparer server to scan the redo log regions and prefilter the change messages
One builder server to merge redo records from the preparer servers
[Figure: the cp01 coordinator process with its reader, preparer, and builder slave processes (ms01, ms02, …)]
Capture Process: Components
At the operating-system level, the capture processes are named cp01, cp02, and so on. The cpnn process acts as a coordinator for the other capture slave (background) processes. If you use the default value for capture process parallelism (parallelism = 1), there will be four background processes used by the capture process mechanism:
One reader server that reads the redo log to find changes
One preparer server that formats the changes found in the redo logs by the reader server in parallel and performs prefiltering of these changes. Prefiltering involves sending partial information about changes (such as schema and object name for a change) to the rules engine for evaluation and receiving the results of the evaluation. If the parallelism parameter is set to a value greater than 1 for the capture process, additional preparer servers are added.
One builder server that merges redo records from the preparer servers. The builder server preserves the SCN order of these redo records and passes the merged redo records to the capture process.
The cpnn process gets the transactions from the builder server, formats the changes into LCRs, and performs the final rule evaluation on these LCRs. Depending on the rule evaluation results, the LCR is either enqueued into the staging area or discarded.
Oracle Database 11g: Implement Streams I - 145

146 Identifying Changes to Capture
A capture process uses rules to determine which changes are captured. The capture process completes the following steps:
Finds changes in the redo logs
Prefilters changes at the object and schema level
Formats the changes as LCRs
Performs LCR filtering and discards selected changes
Enqueues the remaining LCRs
Identifying Changes to Capture
A capture process captures changes or discards changes based on the rules that you define. Each rule specifies the database objects for which the capture process captures changes, the types of changes to capture, and whether the change should be included or not. The capture process never captures changes made to objects in the SYS, SYSTEM, or CTXSYS schemas. You can create table, schema, database (global), or subset rules for capture.
During prefiltering, a capture process uses rules to partially evaluate the changes found in the redo logs at the object level and schema level to determine whether to continue processing a change or to discard it. Prefiltering is a safe optimization that is performed with incomplete information. During prefiltering:
A change is discarded if the capture process can ensure that at least one negative rule associated with the capture process would evaluate to TRUE for the change
A change is discarded if the capture process can ensure that none of the positive rules associated with the capture process will evaluate to TRUE for the change
If the results of the partial evaluation are uncertain, the change is not discarded
Changes that are not discarded are sent to the capture builder process for further processing.
Oracle Database 11g: Implement Streams I - 146

147 Oracle Database 11g: Implement Streams I - 147
Identifying Changes to Capture (continued)
During LCR filtering, the capture process performs a full evaluation of those LCRs for which the partial evaluation was inconclusive. For example, the following rule condition, which checks whether the quantity ordered for an item is zero, cannot be evaluated by using only the object owner and name:
((:dml.get_object_owner() = 'OE' AND
  :dml.get_object_name() = 'ORDER_ITEMS') AND
 ((:dml.get_command_type() IN ('UPDATE','INSERT')) AND
  (:dml.get_value('NEW','QUANTITY') IS NOT NULL) AND
  (:dml.get_value('NEW','QUANTITY').AccessNumber() = 0)))
Based on the results of this final evaluation, LCRs are discarded if the rules in the negative rule set evaluate to TRUE or the rules in the positive rule set evaluate to FALSE. LCRs that satisfy the rules in the positive rule set and do not evaluate to TRUE for rules in the negative rule set for the capture process are enqueued into the queue that is associated with the capture process.
Oracle Database 11g: Implement Streams I - 147

148 Oracle Database 11g: Implement Streams I - 148
Data Types Captured
A capture process:
Captures changes made to tables with columns of all Oracle data types, including:
Columns protected by TDE
XMLType columns stored as CLOB
With the exception of:
BFILE
ROWID
User-defined types (including object types, REFs, VARRAYs, and nested tables)
Oracle-supplied object types
Raises an error if changes are made to tables with columns defined by unsupported data types
Data Types Captured
A capture process does not capture the results of DML changes to columns of the following data types: BFILE, ROWID, and user-defined types (including object types, REFs, VARRAYs, nested tables, and Oracle-supplied types). A capture process can capture changes to columns that use transparent data encryption (TDE) in Oracle Database 11g.
A capture process raises an error if it tries to create a row LCR for a DML change involving a column of an unsupported data type. The capture process writes the LCR into its process trace file, signals an error, and is disabled. You can query the ALL_STREAMS_UNSUPPORTED view to determine which database tables are not supported in the current release (Oracle Database 11g) and whether these objects are automatically ignored by the Streams processes. To reenable the capture process, you must modify the rules used by the capture process to avoid capturing the unsupported data type, and then restart the capture process.
In Oracle Database 11g, Oracle Streams can capture, propagate, and apply changes to the XMLType data. Capture processes can capture changes to XMLType columns that are stored as CLOB columns, but they cannot capture changes to XMLType columns that are stored as relational objects or as binary XML. Apply processes can apply changes to XMLType columns that are stored as CLOB columns, as relational objects, or as binary XML.
Oracle Database 11g: Implement Streams I - 148

149 Oracle Database 11g: Implement Streams I - 149
Data Types Captured (continued) You can add rules to a negative rule set for a capture process that instruct the capture process to discard changes to tables with columns of unsupported data types. However, if these rules are not simple rules, a capture process may create a row LCR for the change and continue to process it. In this case, a change that includes an unsupported data type may cause the capture process to raise an error, even if the change does not satisfy the rule sets used by the capture process. The DBMS_STREAMS_ADM package creates only simple rules. Note When using Oracle Streams with an earlier release, there may be additional data types that are not supported. If your Streams environment includes one or more older databases, ensure that the row LCRs are not sent to a database that does not support all data types in the row LCRs. You can query the ALL_STREAMS_NEWLY_SUPPORTED view in an Oracle Database 11g instance to determine which database tables are not supported for earlier databases, as well as the reason that the tables are not supported. A capture process does not capture the results of DML changes to columns of the following data types: SecureFile CLOB, NCLOB, and BLOB BFILE ROWID User-defined types (including object types, REFs, varrays, and nested tables) XMLType stored object relationally or as binary XML The following Oracle-supplied types: any types, URI types, spatial types, and media types In addition, a capture process does not capture the results of DML changes to virtual columns. A capture process raises an error if it tries to create a row LCR for a DML change to a column of an unsupported data type. When a capture process raises an error, it writes the LCR that caused the error into its trace file, raises an ORA error, and is disabled. In this case, modify the rules used by the capture process to avoid the error, and restart the capture process. Oracle Database 11g: Implement Streams I - 149

150 Streams Support for Transparent Data Encryption
Oracle Streams now provides the ability to transparently: Decrypt values that are protected by Transparent Database Encryption (TDE) for filtering, processing, and so on Reencrypt values so that they are never in clear while on disk Capture Staging Apply Streams Support for Transparent Data Encryption In Oracle Database 11g, Oracle Streams supports Transparent Data Encryption. Oracle Streams provides the ability to transparently: Decrypt values protected by Transparent Database Encryption (TDE) for filtering, processing, and so on Reencrypt values so that they are never in clear while on disk (as opposed to memory) If the corresponding column in the apply database has TDE support, the applied data is transparently reencrypted using the keys of the local database. If the column value was encrypted at the source, and the corresponding column in the apply database is not encrypted, the apply process raises an error unless the PRESERVE_ENCRYPTION apply parameter is set to FALSE. Whenever LCRs are stored on disk, such as due to queue or apply spilling and apply error creation, the data is encrypted if the local database supports TDE. This is done transparently without any user intervention. LCR message tracing does not display clear text of the encrypted column values. TDE enables encryption of sensitive data in database columns stored in the operating system files. In addition, it provides secure storage and management of encryption keys in a security module external to the database. Streams support for TDE enables added security of replicated data and application data, as well as any messages spilled to disk from queues or by the apply process. Additionally, it does not require any changes to applications and is transparent to the end user. Oracle Database 11g: Implement Streams I - 150

151 Oracle Database 11g: Implement Streams I - 151
Wallet Management Local capture: Single wallet with full history No need for additional wallet management Downstream capture: “Source wallet” must be available for capture process (Wallet copied to downstream database) Wallet Management The Streams local capture process has access to the single wallet that keeps track of the full history of the keys for a database. No additional wallet management is necessary. For downstream capture, the wallet from the source database must be transported or copied to the downstream database so that it can be made available for the capture process. Oracle Database 11g: Implement Streams I - 151

152 Types of DML Changes Captured
A capture process supports the following DML changes:
INSERT, UPDATE, and MERGE
DELETE
Piecewise updates to large objects (LOBs)
A capture process does not capture changes made to:
Temporary or object tables
SYS and SYSTEM objects
Capture of unsupported data types causes the capture process to raise an error.
Types of DML Changes Captured
The capture process converts MERGE changes to INSERT and UPDATE changes. MERGE is not a valid operation type in a row LCR.
A capture process captures the changes made to:
Relational tables
Clustered tables
Partitioned tables
Index-organized tables (with certain restrictions)
A capture process does not capture the DML changes made to:
Temporary tables (because no redo is generated)
Object tables
A capture process captures neither the DML nor the DDL changes made to:
Objects in the SYS, SYSTEM, and CTXSYS schemas
User-defined types
If you share a sequence at multiple databases, the sequence values that are used for individual rows within these databases may vary. Also, the changes to the actual sequence values are not captured. For example, if a user inserts a row by using NEXTVAL, the capture process does not capture the change to the sequence resulting from this operation.
Oracle Database 11g: Implement Streams I - 152

153 Oracle Database 11g: Implement Streams I - 153
Types of DML Changes Captured (continued) A capture process can capture the changes made to an index-organized table as long as the following conditions are met: The index-organized table does not contain any columns of the following data types: ROWID UROWID User-defined types (including object types, REFs, VARRAYs, and nested tables) The index-organized table does not have row movement enabled if it is partitioned. If an index-organized table does not meet these requirements, a capture process raises an error when a user makes a change to the index-organized table and the change satisfies the capture process rule sets. Oracle Database 11g: Implement Streams I - 153

154 Oracle Database 11g: Implement Streams I - 154
Types of DDL Captured
A capture process supports DDL changes to the following types of objects:
Tables
Indexes
Views
Sequences
Synonyms
PL/SQL packages, procedures, and functions
Triggers
Changes to users or roles
GRANT or REVOKE on users or roles
Types of DDL Captured
You can use the following types of rules to capture the changes to these database objects:
Table rules instruct a capture process to capture the DDL changes only to tables.
Schema rules instruct a capture process to capture the DDL changes to tables, indexes, triggers, views, synonyms, and users. Regarding users, however, GRANT and REVOKE changes are captured only if you specify global rules for a capture process.
Global rules instruct a capture process to capture the DDL changes to all types of database objects listed in the slide.
A capture process can capture DDL statements but not the results of DDL statements, unless the DDL statement is a CREATE TABLE AS SELECT statement. For example, when a capture process captures an ANALYZE statement, it does not capture the statistics generated by the ANALYZE statement. However, when a capture process captures a CREATE TABLE AS SELECT statement, it captures the statement itself and all the rows that are selected (as INSERT row LCRs).
Oracle Database 11g: Implement Streams I - 154

155 Commands That Are Not Captured
ALTER SESSION ALTER SYSTEM CALL or EXECUTE for PL/SQL procedures EXPLAIN PLAN LOCK TABLE SET ROLE NOLOGGING or UNRECOVERABLE operations FLASHBACK DATABASE Commands That Are Not Captured A capture process captures the DDL changes that satisfy the rules in the capture process rule set, but ignores the following types of changes: The session control statements ALTER SESSION and SET ROLE The system control statement ALTER SYSTEM Invocations of PL/SQL procedures, which means that a call to a PL/SQL procedure is not captured. However, if a call to a PL/SQL procedure causes changes to the database objects, these changes may be captured by a capture process if the changes satisfy the capture process rule sets. FLASHBACK DATABASE statements If you use the NOLOGGING keyword for a SQL operation, the changes resulting from the SQL operation cannot be captured by a capture process. Therefore, it is recommended not to use this keyword if you want to capture the changes that result from a SQL operation. Likewise, if you use the UNRECOVERABLE clause in the SQL*Loader control file for a direct path load, the changes resulting from the direct path load cannot be captured by a capture process. If you perform an UNRECOVERABLE direct path load at a source database but do not perform a similar direct path load at a destination database, there may be apply errors at that destination database when changes are made to the loaded objects at the source database. Oracle Database 11g: Implement Streams I - 155

156 Oracle Database 11g: Implement Streams I - 156
Commands That Are Not Captured (continued)
Online table redefinition using the DBMS_REDEFINITION package is supported for tables that are configured for replication using Oracle Streams as long as you do not use DBMS_REDEFINITION to change the column data types or column names (also referred to as logical redefinition). If you use the DBMS_REDEFINITION package to alter the storage characteristics of the table or to add support for parallel queries (also referred to as physical redefinition), the changes are not replicated by Oracle Streams to the destination sites.
Note that although a capture process always ignores FLASHBACK DATABASE statements, it captures FLASHBACK TABLE statements when its rule sets instruct it to capture the DDL changes to the specified table. FLASHBACK TABLE statements include time stamps or SCN values in their syntax. When a capture process captures a DDL change that specifies time stamps or system change number (SCN) values in its syntax, you must configure a DDL handler for any apply processes that will dequeue the change. The DDL handler must process time stamp or SCN values properly.
Oracle Database 11g: Implement Streams I - 156

157 DDL That Is Not Supported
DDL that is captured but not supported:
CREATE CONTROLFILE
CREATE or ALTER DATABASE
CREATE, ALTER, or DROP MATERIALIZED VIEW LOG
CREATE, ALTER, or DROP MATERIALIZED VIEW
CREATE, ALTER, or DROP SUMMARY
CREATE SCHEMA
CREATE PFILE
CREATE SPFILE
RENAME (Use ALTER TABLE instead.)
DDL That Is Not Supported
Some commands can be captured from the redo and enqueued, but are not supported by an apply process. If an apply process receives a DDL LCR that specifies an operation that cannot be applied, the apply process ignores the DDL LCR and records the following message in the apply process trace file, followed by the DDL text that is ignored:
Apply process ignored the following DDL:
An apply process applies all other types of DDL changes if the DDL LCRs containing the changes satisfy the rules in the apply process rule set. The ability to rename a table is supported with the ALTER TABLE … RENAME clause.
Oracle Database 11g: Implement Streams I - 157

158 Viewing Objects That Are Not Supported By Capture
SELECT * FROM DBA_STREAMS_UNSUPPORTED
WHERE owner IN ('OE', 'PM', 'SH', 'IX');

OWNER TABLE_NAME              REASON                        AUTO
OE    CUSTOMERS               column with user-defined type NO
IX    AQ$_ORDERS_QUEUETABLE_S AQ queue table                NO
PM    TEXTDOCS_NESTEDTAB
SH    DR$SUP_TEXT_IDX$K       unsupported column exists     YES

Viewing Objects That Are Not Supported By Capture
You can easily determine whether a table is supported for replication with Oracle Streams by querying the DBA_STREAMS_UNSUPPORTED data dictionary view. This view displays the tables in the database that are not supported by Streams in Oracle Database 11g. For example:
SELECT table_name, reason, auto_filtered
FROM DBA_STREAMS_UNSUPPORTED
WHERE owner = 'SH';

TABLE_NAME             REASON                    AUT
DR$SUP_TEXT_IDX$K      unsupported column exists YES
MVIEW$_EXCEPTIONS      unsupported column exists NO
SALES_TRANSACTIONS_EXT external table            NO

Certain unsupported commands are automatically filtered out by Oracle Streams, such as DDL statements for materialized views and materialized view logs. The auto_filter column indicates whether or not Oracle Streams is automatically filtering out the changes. For example:
SELECT * FROM DBA_STREAMS_UNSUPPORTED WHERE owner='HR';

OWNER TABLE_NAME      REASON                AUTO
HR    MLOG$_EMPLOYEES materialized view log YES

Oracle Database 11g: Implement Streams I - 158

159 Limiting Captured Messages
You can use rules to limit what is captured:
Only DML rules or only DDL rules
Negative rules
Subset rules
Complex rules
Rules based on Streams tags

Limiting Captured Messages
Streams enables you to control which information to share, and where to share it, by using rules. A rule condition combines one or more expressions and conditions and returns a Boolean value of TRUE, FALSE, or NULL (unknown) based on a message. When you specify rules for a capture process, if a rule in the positive rule set evaluates to TRUE, the capture process captures the information. If a rule in the negative rule set evaluates to TRUE, the capture process does not capture the information.
You can create table-level rules only for those tables that you want to replicate. Changes to other tables, even those in the same schema, are not captured as long as no rule evaluates to TRUE for those tables. If you use a higher-level rule, such as a schema or global rule, changes for a greater number of tables are captured. You can use subset rules or the and_condition parameter when creating a rule to further limit the changes that are captured. You can also use Streams tags to identify changes from an application or a specific user, and then use the and_condition parameter to create rules for the capture process that include or exclude these changes.
Oracle Database 11g: Implement Streams I - 159
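As an illustration of a negative rule, the following sketch places a table rule in the capture process's negative rule set, so that changes to the table are discarded rather than captured. It assumes the streams_queue queue and the capture1 capture process already exist; the HR.JOB_HISTORY table is only an example.

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES (
    table_name     => 'HR.JOB_HISTORY',  -- example table whose changes to discard
    streams_type   => 'capture',
    streams_name   => 'capture1',
    queue_name     => 'streams_queue',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => FALSE);  -- FALSE places the rule in the negative rule set
END;
/
```

Because negative rules are evaluated before positive rules, this discards DML changes to the table even if a schema-level positive rule would otherwise capture them.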

160 Oracle Database 11g: Implement Streams I - 160
Streams Tags
Every redo entry in the redo log has a tag associated with it. You can use tags to identify the source of the generated redo. Tags become part of the LCRs that are created by a capture process.
(Diagram: one session generating redo with tag => HEXTORAW('001') and another with tag => NULL, both written to the redo log by LGWR)

Streams Tags
You can use a tag to determine whether a redo entry or an LCR contains a change that originated in the local database or at a different database, so that you avoid sending LCRs back to the database where they originated. The tag value in the redo entry for a change may determine whether or not the change is captured. If a change is captured, the tag becomes part of the LCR that is created by the capture process. Similarly, after a tag is part of an LCR, the value of the tag may determine whether a propagation job propagates the LCR and whether an apply process applies the LCR. The behavior of a transformation, DML handler, or error handler can also depend on the value of the tag.
The data type of the tag is RAW. A NULL tag consumes no space in the redo entry. The size limit for a tag value is 2,000 bytes. The HEXTORAW conversion function is necessary when comparing a tag value within rules to optimize rule evaluation.
Oracle Database 11g: Implement Streams I - 160

161 Oracle Database 11g: Implement Streams I - 161
Streams Tags
By default, when a user or application generates redo entries, the tag is NULL for each redo entry.
Redo entries generated by an apply process that was not created by one of the MAINTAIN_* procedures use a default tag value of "00" (double zero).
You can specify non-NULL tags for redo entries that are generated by a certain session or by an apply process.

Streams Tags (continued)
The ADD_TABLE_RULES, ADD_SCHEMA_RULES, and ADD_GLOBAL_RULES procedures set the default for include_tagged_lcr to FALSE. This means that changes created by an apply process that have a non-NULL Streams tag are not captured. The MAINTAIN_TABLES, MAINTAIN_SCHEMAS, and MAINTAIN_GLOBAL procedures set this parameter to TRUE to enable hub-and-spoke configurations. When you use the MAINTAIN_* procedures, the tag is set based on a generated object ID for apply.
If you set INCLUDE_TAGGED_LCR to TRUE for the capture process, and an apply process is running on the same database as the capture process, the capture process captures the changes made by the apply process. To avoid having a capture process capture changes that originated from a particular source database, make sure to use the SOURCE_DATABASE parameter when creating the capture rule. Propagation rules are configured to ensure that a change is not sent back to its source database.
Oracle Database 11g: Implement Streams I - 161

162 Altering the Value of Streams Tags
You can control the value of the tags that are generated in the redo log in the following ways:
Use the DBMS_STREAMS.SET_TAG procedure to specify the value of the redo tags that are generated in the current session.
Use the SET_TAG member procedure for an existing LCR.
Use the CREATE_APPLY or ALTER_APPLY procedure in the DBMS_APPLY_ADM package to control the value of the redo tags that are generated when an apply process runs.

Altering the Value of Streams Tags
When you use the DBMS_STREAMS.SET_TAG procedure, if a database change is made in the current session, the tag becomes part of the redo entry that records the change. Different sessions can have the same tag setting or different tag settings.
To modify the tag for an LCR, you can use the SET_TAG member procedure. If the CREATE_APPLY or ALTER_APPLY procedure in the DBMS_APPLY_ADM package is used to control the value of the redo tags that are generated by the apply process, all sessions coordinated by the apply process coordinator use this tag setting.
If you modify the value of the Streams tag for an application or user session, you can create a rule for the capture process to capture (or discard) these changes. To specify that changes with a specific Streams tag must be captured, use the and_condition parameter when creating the rules for capture, as follows:
and_condition => ':lcr.get_tag() = HEXTORAW(''<tag>'')'
Note that <tag> is the value that you set for the Streams tag. For changes that are applied to the local database (where the capture is running) from other databases, you use the tag (rather than the SOURCE_DATABASE parameter) to filter out those changes. If you specify and_condition for a rule in the positive rule set, only those changes that contain the specified Streams tag are captured. If you specify and_condition for a rule in the negative rule set, changes that contain the specified Streams tag are discarded.
Oracle Database 11g: Implement Streams I - 162
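Session-level tagging with DBMS_STREAMS.SET_TAG can be sketched as follows; the tag value '1D' and the sample UPDATE are arbitrary illustrations, not values from the course material.

```sql
-- Tag all redo generated by this session with the raw value '1D'
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/
-- Subsequent DML in this session carries the tag in its redo entries
UPDATE hr.employees SET salary = salary * 1.05 WHERE employee_id = 100;
COMMIT;
-- Reset the tag to the default (NULL) for the rest of the session
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```

A capture rule whose and_condition tests :lcr.get_tag() against HEXTORAW('1D') could then include or exclude exactly these changes.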

163 Capture Process Architecture
(Diagram: database instance SGA showing the Streams pool, shared pool, Java pool, and large pool, with the LGWR and CPnn background processes)

Capture Process Architecture
Capture is implemented as a background process named CPnn. A capture process captures changes from the redo log by using the infrastructure of LogMiner. Streams automatically configures LogMiner. When you create a local capture process, the DBMS_CAPTURE_ADM.BUILD procedure is run automatically to extract (or build) the data dictionary to the redo log. The first time a capture process is started at the database, it uses the extracted data dictionary information in the redo log to create a LogMiner data dictionary, which is separate from the primary data dictionary for the source database. Additional capture processes may use this LogMiner data dictionary, or they may create new Streams data dictionaries.
The memory required by persistent sessions, such as those used by Oracle Streams SQL Apply, is automatically allocated out of the Streams pool if the Streams pool is configured. If it is not configured, the persistent session allocates memory from the shared pool. Each capture process requires at least 30 MB of contiguous memory for use by a session, and an additional 10 MB for each degree of parallelism when the capture process is configured with parallelism greater than one.
Oracle Database 11g: Implement Streams I - 163
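The Streams pool described above is sized with the STREAMS_POOL_SIZE initialization parameter. A minimal sketch follows; the 200M figure is an arbitrary example, not a sizing recommendation.

```sql
-- Reserve 200 MB for the Streams pool (instance started from an spfile)
ALTER SYSTEM SET streams_pool_size = 200M SCOPE=BOTH;

-- Check how much of the pool is currently free
SELECT pool, ROUND(bytes/1024/1024) AS free_mb
FROM   v$sgastat
WHERE  pool = 'streams pool'
AND    name = 'free memory';
```

If Automatic Shared Memory Management is in use, the value acts as a minimum and the pool can grow beyond it.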

164 Oracle Database 11g: Implement Streams I - 164
Summary
In this lesson, you should have learned how to:
Describe the function of the capture process
List the types of changes that are captured
Describe the capture process architecture
Describe Streams tags and the ways in which they are used
Oracle Database 11g: Implement Streams I - 164

165 Configuring a Capture Process

166 Oracle Database 11g: Implement Streams I - 166
Objectives
After completing this lesson, you should be able to:
Configure a capture process
Manage a capture process
Monitor a capture process
Oracle Database 11g: Implement Streams I - 166

167 Creating the Capture Process
Automatically create a capture process by using the DBMS_STREAMS_ADM package:
ADD_TABLE_RULES
ADD_SCHEMA_RULES
ADD_GLOBAL_RULES
ADD_SUBSET_RULES
MAINTAIN_* procedures
Manually create a capture process by calling the CREATE_CAPTURE procedure of DBMS_CAPTURE_ADM.
Creation of the first capture process in a database may take time because the data dictionary is being copied to the redo logs.

Creating the Capture Process
You can use any of the following procedures to create a capture process:
DBMS_STREAMS_ADM.ADD_TABLE_RULES
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES
DBMS_STREAMS_ADM.ADD_GLOBAL_RULES
DBMS_STREAMS_ADM.ADD_SUBSET_RULES
DBMS_STREAMS_ADM.MAINTAIN_*
DBMS_CAPTURE_ADM.CREATE_CAPTURE
Each of the procedures in the DBMS_STREAMS_ADM package:
Creates a capture process with the specified name if it does not exist
Creates a rule set for the capture process if the capture process does not have one. The rule set can be either positive or negative.
Adds table, schema, global, or subset rules to the rule sets associated with the capture process
Configures supplemental logging of key columns for the tables
The CREATE_CAPTURE procedure of the DBMS_CAPTURE_ADM package creates a capture process but does not create a rule or rule set for it. However, by using the CREATE_CAPTURE procedure, you can specify an existing rule set to associate with the capture process and a start system change number (SCN) for the capture process.
Oracle Database 11g: Implement Streams I - 167

168 Oracle Database 11g: Implement Streams I - 168
Creating the Capture Process (continued)
Because the data dictionary is duplicated when the first capture process is created, it may take some time to create the first capture process for a database. The amount of time required depends on the number of database objects in the database.
When you create a new capture process in an instance that already has at least one capture process, the new capture process by default attempts to share one of the existing LogMiner data dictionaries with one or more other capture processes. When this new capture process is started, it may need to scan redo log files with a FIRST_CHANGE# value that is lower than the start SCN. To prevent the new capture process from requesting very old archived log files, you can use the DBMS_CAPTURE_ADM.BUILD procedure to extract data dictionary information to the redo logs, and to create a new valid first SCN value that can be used by the new capture process. If you then specify this SCN for the first_scn parameter when creating the new capture process, the new capture process creates a new LogMiner data dictionary when it is started for the first time.
Note: Before you create a capture process, you must have completed the following steps:
Configured the database initialization parameters
Enabled archive logging
Created a Streams queue to associate with the capture process, if one does not exist, or automatically created the Streams queues by using the Maintain scripts
Oracle Database 11g: Implement Streams I - 168
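The BUILD step described above can be sketched as follows; the OUT parameter returns the first SCN that a subsequently created capture process could use for its first_scn parameter.

```sql
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  -- Extract the data dictionary to the redo log and return
  -- a valid first SCN for a new capture process
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN: ' || scn);
END;
/
```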

169 Creating the Capture Process
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
  table_name         IN  VARCHAR2,
  streams_type       IN  VARCHAR2,
  streams_name       IN  VARCHAR2 DEFAULT NULL,
  queue_name         IN  VARCHAR2 DEFAULT 'streams_queue',
  include_dml        IN  BOOLEAN  DEFAULT true,
  include_ddl        IN  BOOLEAN  DEFAULT false,
  include_tagged_lcr IN  BOOLEAN  DEFAULT false,
  source_database    IN  VARCHAR2 DEFAULT NULL,
  dml_rule_name      OUT VARCHAR2,
  ddl_rule_name      OUT VARCHAR2,
  inclusion_rule     IN  BOOLEAN  DEFAULT true,
  and_condition      IN  VARCHAR2 DEFAULT NULL);

Creating the Capture Process (continued)
You can use the ADD_TABLE_RULES procedure to add capture or apply rules for a table. One version of this procedure contains two OUT parameters; the other does not. The parameters are:
table_name: The name of the table, specified as [schema_name.]object_name
streams_type: The type of process; values can be CAPTURE, SYNC_CAPTURE, or APPLY
streams_name: The label of the process. If the specified process does not exist, it is created automatically.
queue_name: The name of the local queue into which the changes are enqueued
include_dml: Whether or not a data manipulation language (DML) rule for capturing DML changes should be created
include_ddl: Whether or not a data definition language (DDL) rule for capturing DDL changes should be created
include_tagged_lcr: Whether or not a redo entry is considered for capture if it has a non-NULL tag. If FALSE, a redo entry is considered for capture only when the redo entry or the LCR contains a NULL tag.
source_database: The global name of the source database
dml_rule_name: Contains the DML rule name if include_dml is TRUE
ddl_rule_name: Contains the DDL rule name if include_ddl is TRUE
Oracle Database 11g: Implement Streams I - 169

170 Oracle Database 11g: Implement Streams I - 170
Creating the Capture Process (continued)
inclusion_rule: If TRUE, the rule is a positive rule; if FALSE, the rule is a negative rule.
and_condition: If non-NULL, appends the specified condition to the system-created rule condition by using an AND clause
The ADD_SCHEMA_RULES procedure adds capture or apply rules for a schema. One version of this procedure contains two OUT parameters; the other does not.
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
  schema_name        IN  VARCHAR2,
  streams_type       IN  VARCHAR2,
  streams_name       IN  VARCHAR2 DEFAULT NULL,
  queue_name         IN  VARCHAR2 DEFAULT 'streams_queue',
  include_dml        IN  BOOLEAN  DEFAULT true,
  include_ddl        IN  BOOLEAN  DEFAULT false,
  include_tagged_lcr IN  BOOLEAN  DEFAULT false,
  source_database    IN  VARCHAR2 DEFAULT NULL,
  dml_rule_name      OUT VARCHAR2,
  ddl_rule_name      OUT VARCHAR2,
  inclusion_rule     IN  BOOLEAN  DEFAULT true,
  and_condition      IN  VARCHAR2 DEFAULT NULL);
The ADD_GLOBAL_RULES procedure adds either capture rules for an entire database, or apply or messaging client rules for all LCRs in a queue. One version of this procedure contains two OUT parameters; the other does not. Its signature is shown abbreviated here:
DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
  ...
  include_tagged_lcr IN  BOOLEAN  DEFAULT false,
  ...);
The ADD_SUBSET_RULES procedure creates table subset rules for capture, apply, or a messaging client. One version of this procedure contains two OUT parameters; the other does not. Its signature is shown abbreviated here:
DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
  table_name       IN  VARCHAR2,
  dml_condition    IN  VARCHAR2,
  streams_type     IN  VARCHAR2 DEFAULT 'apply',
  ...
  source_database  IN  VARCHAR2 DEFAULT NULL,
  insert_rule_name OUT VARCHAR2,
  update_rule_name OUT VARCHAR2,
  delete_rule_name OUT VARCHAR2);
Oracle Database 11g: Implement Streams I - 170

171 Creating the Capture Process
Example of creating a capture process automatically:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES (
    table_name         => 'HR.LOCATIONS',
    streams_type       => 'capture',
    streams_name       => 'capture1',
    queue_name         => 'streams_queue',
    include_dml        => TRUE,
    include_ddl        => FALSE,
    include_tagged_lcr => FALSE,
    inclusion_rule     => TRUE );
END;
/

Creating the Capture Process (continued)
When it is run, the procedure shown in the slide performs the following actions:
Calls the DBMS_CAPTURE_ADM.BUILD procedure to dump the data dictionary into the redo logs
Creates a capture process named capture1 if it does not exist
Associates the capture process with an existing queue named streams_queue
Creates a rule set and associates it with the capture process as a positive rule set, if one does not exist. The rule set has a system-generated name.
Creates a positive rule specifying that DML changes to the HR.LOCATIONS table should be enqueued into the staging area. The rule name is system-generated.
Adds the rule to the positive rule set of the capture process
Adds the IS_NULL_TAG = 'Y' condition to the system-created rule. Effectively, this causes the capture process to handle only those redo entries that have a NULL tag. If INCLUDE_TAGGED_LCR is set to TRUE, the IS_NULL_TAG = 'Y' clause is not included in the rule condition. In this case, all redo entries (tagged or not) that satisfy the basic rule condition are processed by the capture process and enqueued into the staging area.
Prepares the table for instantiation. This concept is covered in the lesson titled "Instantiation."
Oracle Database 11g: Implement Streams I - 171

172 Oracle Database 11g: Implement Streams I - 172
Creating the Capture Process (continued) Enables supplemental logging for the primary key, unique key, foreign key, and bitmap index columns in the tables prepared for instantiation. The primary key columns are unconditionally logged. The unique key, foreign key, and bitmap index columns are conditionally logged. If the capture process is a downstream capture process that does not use a database link to the source database, you must prepare the appropriate objects for instantiation and specify the necessary supplemental logging manually at the source database. Oracle Database 11g: Implement Streams I - 172

173 Detailed Capture Management
The DBMS_CAPTURE_ADM package includes the following procedures for managing the capture process:
CREATE_CAPTURE
SET_PARAMETER
START_CAPTURE
ALTER_CAPTURE
INCLUDE_EXTRA_ATTRIBUTE
STOP_CAPTURE
DROP_CAPTURE
BUILD
(Diagram: a capture process at the source database reading the redo logs for changes to database objects and enqueuing messages)

Detailed Capture Management
Use the CREATE_CAPTURE procedure to create capture processes, which are named CP00 through CPnn (where each n is a digit or character). With the SET_PARAMETER procedure, you can customize the operation of the capture process by setting parameters such as maximum_scn. Use the START_CAPTURE procedure to start an existing capture process. With the ALTER_CAPTURE procedure, you can manage the rule sets for a capture process, change the start or first SCN for a capture process, configure a database link for a downstream capture process, or configure a capture user. The INCLUDE_EXTRA_ATTRIBUTE procedure instructs the capture process to include or exclude extra attributes in the LCR. Use the STOP_CAPTURE procedure to stop the capture process. Use the DROP_CAPTURE procedure to drop an existing capture process and (optionally) drop the rules that are associated with it. The BUILD procedure extracts the data dictionary of the current database to the redo logs and automatically specifies database supplemental logging for all primary key and unique key columns.
Oracle Database 11g: Implement Streams I - 173

174 Manually Creating the Capture Process
Example of creating a capture process by using the DBMS_CAPTURE_ADM package:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE (
    queue_name             => 'hr_queue',
    capture_name           => 'hr_capture',
    rule_set_name          => 'hr.hr_4_RS',
    negative_rule_set_name => NULL,
    capture_user           => 'STRMADMIN');
END;
/

Manually Creating the Capture Process
When it is run, the procedure shown in the slide performs the following actions:
Creates a capture process named hr_capture. A capture process with the same name must not already exist.
Associates the capture process with the queue named hr_queue
Configures the existing rule set named hr_4_rs as the positive rule set for the capture process
Does not configure a negative rule set for the capture process
Grants STRMADMIN the enqueue privilege on the queue that is used by the capture process and configures that user as a secure queue user of the queue
To specify a capture user, the user calling the CREATE_CAPTURE procedure must have the DBA role. The user specified as the capture user must have the necessary privileges to capture the changes.
When manually creating a capture process, you can also:
Configure the capture process as a downstream capture process
Specify a first SCN and start SCN for the capture process
Oracle Database 11g: Implement Streams I - 174

175 Setting Capture Process Parameters
Use the DBMS_CAPTURE_ADM package to:
Alter the capture process configuration
Set capture process parameters that control the way a capture process operates
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER (
    capture_name => 'capture1',
    parameter    => 'parallelism',
    value        => '1');
END;
/

Setting Capture Process Parameters
The DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure is used to modify the configuration of a capture process. This procedure is covered in more detail in the lesson titled "Administering a Streams Environment."
The example in the slide sets the PARALLELISM parameter for the capture process CAPTURE1 to 1 by using the DBMS_CAPTURE_ADM.SET_PARAMETER procedure. The value parameter of SET_PARAMETER is always entered as a VARCHAR2, even if the parameter value is a number. Altering the PARALLELISM parameter automatically stops and restarts the capture process.
Note: Setting parallelism on capture increases the memory requirements of the capture process by 10 MB for each degree of parallelism. The best practice for Streams capture is to use the default value for parallelism (1).
Oracle Database 11g: Implement Streams I - 175

176 Capture Process Parameters
DISABLE_ON_LIMIT
DOWNSTREAM_REAL_TIME_MINE
MAXIMUM_SCN
MESSAGE_LIMIT
PARALLELISM
STARTUP_SECONDS
TIME_LIMIT
TRACE_LEVEL
WRITE_ALERT_LOG

Capture Process Parameters
The DISABLE_ON_LIMIT parameter specifies whether or not the capture process must be disabled when it reaches the value specified by the TIME_LIMIT or MESSAGE_LIMIT parameter. If this parameter is set to N, the capture process is restarted automatically after reaching a limit. The default value is N.
To configure real-time mining for a capture process, you must set the DOWNSTREAM_REAL_TIME_MINE capture process parameter to 'Y'. After setting this parameter, switch the log files at the source database by using the ALTER SYSTEM ARCHIVE LOG CURRENT SQL statement to begin real-time downstream capture. If you set this parameter to 'Y', you must have configured redo log transmission by the LGWR process at the source database and the standby redo logs at the downstream database. There can be only one real-time downstream capture process at any downstream database.
The MAXIMUM_SCN parameter indicates that the capture process must be disabled before capturing a change record with an SCN greater than or equal to the value that is specified. If the value is infinite, the capture process runs regardless of the SCN value. The default value is infinite.
You can use the MESSAGE_LIMIT parameter to specify the number of messages that the capture process must capture before shutting down automatically. The default value is infinite.
Oracle Database 11g: Implement Streams I - 176

177 Oracle Database 11g: Implement Streams I - 177
Capture Process Parameters (continued)
The PARALLELISM parameter specifies the number of parallel execution servers that are used to concurrently mine redo logs. The default value is 1. When you alter this parameter, the capture process is stopped and restarted automatically.
Note: Setting the PARALLELISM parameter to a number greater than the number of available parallel execution servers may disable the capture process. Make sure that the PROCESSES and PARALLEL_MAX_SERVERS initialization parameters are set appropriately when you set this parameter.
STARTUP_SECONDS indicates the maximum number of seconds to wait for another instantiation of the same capture process to finish. This parameter is useful only if you are manually starting the capture process. If the other instantiation of the same capture process does not finish within this time, the capture process does not start. If you set it to infinite, the capture process does not start until another instantiation of the same capture process finishes. The default value is 0.
The TIME_LIMIT parameter specifies a limit for the length of time, measured in seconds, that a capture process can run before shutting down automatically. The default value is infinite.
You must use the TRACE_LEVEL parameter only under guidance from Oracle Support Services.
If you set WRITE_ALERT_LOG to Y, the capture process writes a message to the alert log when it stops, indicating the reason for the process exit. If it is set to N, the capture process does not write a message to the alert log on exit. The default value is Y.
In Oracle Database 11g, there is a new MESSAGE_TRACKING_FREQUENCY parameter that takes a value of 0 or a positive integer. It controls the frequency at which messages captured by the capture process are tracked automatically. For example, if this parameter is set to the default value of 2000000, every two-millionth message is tracked automatically.
The tracking label used for automatic message tracking is capture_process_name:AUTOTRACK, where capture_process_name is the name of the capture process. Only the first 20 bytes of the capture process name are used; the rest is truncated if it exceeds 20 bytes. If the parameter is set to 0 (zero), messages are not tracked automatically.
Oracle Database 11g: Implement Streams I - 177
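As a sketch of how these parameters combine, the following stops the capture1 process after one hour of running and leaves it disabled once the limit is reached; the values are arbitrary examples.

```sql
BEGIN
  -- Run for at most 3,600 seconds ...
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture1',
    parameter    => 'time_limit',
    value        => '3600');
  -- ... and stay disabled after the limit is reached
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture1',
    parameter    => 'disable_on_limit',
    value        => 'Y');
END;
/
```

Note that both values are passed as VARCHAR2 strings, as described in the previous slide.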

178 Starting the Capture Process
When you create a capture process, it does not start automatically. This allows the DBA to set the capture parameters appropriately before capture starts.
You must not start capture until all apply processes and propagation jobs have been configured and started.
To start the capture process, use the DBMS_CAPTURE_ADM.START_CAPTURE procedure:
EXEC DBMS_CAPTURE_ADM.START_CAPTURE('capture1');

Starting the Capture Process
Before starting the capture processes and configuring the propagation jobs in a new Streams environment, ensure that any propagation jobs or apply processes that will receive messages are configured to handle them. That is, the propagation jobs or apply processes should exist, and each one should be associated with a rule set that handles the messages appropriately. If these propagation jobs and apply processes are not configured properly to handle these messages, the change messages may be lost.
The status of the capture process is persistent:
If the status is ENABLED, the capture process is started upon database instance startup.
If the status is ABORTED or DISABLED, the capture process is not started upon database instance startup.
Oracle Database 11g: Implement Streams I - 178

179 Modifying the Capture Process
The ALTER_CAPTURE procedure is used to:
Specify or remove a positive or negative rule set
Specify a capture user
Change the start SCN or first SCN
Change the duration for checkpoint information retention
Alter downstream capture to use a database link

Modifying the Capture Process
With the ALTER_CAPTURE procedure, you can manage the rule sets for a capture process, change the start or first SCN for a capture process, configure a database link for a downstream capture process, or configure a capture user.
You can modify the start SCN for an existing capture process with the start_scn parameter. The SCN value specified for the start SCN must be greater than or equal to the first SCN for the capture process. You can alter the start SCN for a capture process to an earlier SCN to recapture changes from the redo logs when performing recovery operations, but you can go back only as far as the SCN indicated by the first SCN parameter.
Increasing the first_scn for a capture process helps reduce the memory requirements for that capture process by deleting unnecessary capture process checkpoint information. Raising the first_scn limits how far back you can set the start_scn. This reduces your ability to recover previously captured changes, but improves the time that it takes to start a capture process. To adjust the first_scn automatically, you can use the ALTER_CAPTURE procedure to set checkpoint_retention_time to the number of days that a capture process should retain checkpoints before automatically purging them. You can also specify partial days by using decimals. For example, eight and a half days would be 8.5.
Oracle Database 11g: Implement Streams I - 179
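The checkpoint retention setting described above could be changed as follows, using the 8.5-day figure from the text; the hr_capture process name follows the earlier examples.

```sql
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'hr_capture',
    checkpoint_retention_time => 8.5);  -- retain checkpoints for 8.5 days
END;
/
```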

180 Oracle Database 11g: Implement Streams I - 180
Modifying the Capture Process (continued)
When a checkpoint is purged based on the setting of checkpoint_retention_time, the LogMiner data dictionary information for the archived redo log file that corresponds to the checkpoint is purged, and the first_scn of the capture process is reset to the SCN value corresponding to the first change in the next archived redo log file.
Oracle Database 11g: Implement Streams I - 180

181 Modifying the Capture Process
Example: Altering the HR_CAPTURE capture process to remove the HR_4_RS positive rule set and assign EMPNEG_RS as the negative rule set
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE (
    capture_name           => 'hr_capture',
    remove_rule_set        => TRUE,
    negative_rule_set_name => 'HR.EMPNEG_RS');
END;
/

Modifying the Capture Process
When you update the rules that are used by an application, a different rule set may be used to test the functionality before implementing it in a production system. After the new rule set has been tested, modifying the capture process to use the new rule set requires only a simple procedural call, as shown in the slide. In this case, two actions are performed:
The current positive rule set that is associated with the HR_CAPTURE capture process is disassociated from the capture process.
The HR.EMPNEG_RS rule set is assigned as the negative rule set for the capture process.
Because this capture process now contains only a negative rule set, it captures all the changes made in the database except the changes that cause a rule in the EMPNEG_RS rule set to evaluate to TRUE.
Oracle Database 11g: Implement Streams I - 181

182 Specifying Extra Attributes for Capture
BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'CAPTURE1',
    attribute_name => 'username',
    include        => TRUE);
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'CAPTURE1',
    attribute_name => 'tx_name',
    include        => FALSE);
END;
/

Specifying Extra Attributes for Capture
You can include the following attributes for a capture process:
row_id: The original row ID in a DML row LCR (This attribute is not included in DDL LCRs or in row LCRs for index-organized tables.)
serial#: The serial number of the session that performed the change captured in the LCR
session#: The identifier of the session that performed the change captured in the LCR
thread#: The thread number of the instance in which the change captured in the LCR was performed
tx_name: The name of the transaction that includes the LCR
username: The name of the user who performed the change captured in the LCR
The example in the slide instructs the CAPTURE1 capture process to include the username in, and exclude the transaction name from, the LCRs that it creates. If you want to exclude an extra attribute that is currently being captured by a capture process, specify the attribute and specify FALSE for the include parameter. The default value of the include parameter is TRUE.
Oracle Database 11g: Implement Streams I - 182

183 Stopping and Dropping the Capture Process
Stopping and dropping a Streams capture process:
STOP_CAPTURE
DROP_CAPTURE

Stopping and Dropping the Capture Process
To stop a capture process, execute the STOP_CAPTURE procedure. This procedure has the following parameters:
CAPTURE_NAME: The name of the capture process. A NULL setting is not allowed. Do not specify an owner.
FORCE: This parameter is reserved for future use.
Because the status of the capture process is persistent, a stopped process (with the status of DISABLED or ABORTED) is not automatically started upon database instance startup.
Execute the DROP_CAPTURE procedure to drop a capture process. The parameters for this procedure are:
CAPTURE_NAME: The name of the capture process that is being dropped. Specify an existing capture process name. Do not specify an owner.
DROP_UNUSED_RULE_SETS: If TRUE, the procedure drops any rule sets, positive and negative, used by the specified capture process if these rule sets are not used by any other Oracle Streams client. Oracle Streams clients include capture processes, propagations, apply processes, and messaging clients. If this procedure drops a rule set, it also drops any rules in the rule set that are not in another rule set. If FALSE, the procedure does not drop the rule sets used by the specified capture process, and the rule sets retain their rules.
Oracle Database 11g: Implement Streams I - 183
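The two procedures are typically used together; a sketch, reusing the capture1 process name from earlier examples:

```sql
-- Stop the capture process, then drop it together with any
-- rule sets that no other Streams client uses
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'capture1');
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name          => 'capture1',
    drop_unused_rule_sets => TRUE);
END;
/
```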

184 Monitoring Capture Processes
To view information about the capture process: DBA_CAPTURE DBA_CAPTURE_PARAMETERS DBA_CAPTURE_EXTRA_ATTRIBUTES V$STREAMS_CAPTURE V$STREAMS_TRANSACTION DBA_STREAMS_RULES To view information about the archived redo logs that are used by the capture process: DBA_REGISTERED_ARCHIVED_LOG DBA_LOGMNR_PURGED_LOG Monitoring Capture Processes DBA_CAPTURE displays information about all capture processes in the database, such as the capture name, the queue owner and name, the rule set owners and names, the start SCN, the first SCN and the checkpoint SCNs for the capture process, the current status of the capture process, and configuration information for a downstream capture process. DBA_CAPTURE_PARAMETERS displays the parameter settings for the current capture process. DBA_CAPTURE_EXTRA_ATTRIBUTES displays the extra attributes included in the LCRs captured by each capture process in the local database. V$STREAMS_CAPTURE can be queried for information, such as rule evaluation statistics, the message enqueuing latency, the redo log scanning latency, the time at which the redo was generated for the most recently captured change, and the SCN of the last redo entry available for the capture process. V$STREAMS_TRANSACTION provides information about the transactions that are being processed by a Streams capture process or apply process. This view can be used to identify long-running transactions and to determine how many LCRs are being processed in each transaction. DBA_STREAMS_RULES displays information about the rules used by capture processes. Oracle Database 11g: Implement Streams I - 184
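For example, a quick status check of the capture side might query two of these views as follows (a sketch; the exact column lists vary by release):

```sql
-- Configuration and status of each capture process
SELECT capture_name, queue_name, status, start_scn
FROM   DBA_CAPTURE;

-- Runtime activity of a started capture process
SELECT capture_name, state,
       total_messages_captured, total_messages_enqueued
FROM   V$STREAMS_CAPTURE;
```

Comparing TOTAL_MESSAGES_CAPTURED with TOTAL_MESSAGES_ENQUEUED over time gives a rough sense of how aggressively the capture rules are filtering redo.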

185 Oracle Database 11g: Implement Streams I - 185
Monitoring Capture Processes (continued) DBA_REGISTERED_ARCHIVED_LOG displays information about the SCN values for the archived redo log files that are registered for each capture process in a database. The DBA_LOGMNR_PURGED_LOG data dictionary view lists the redo log files that are no longer needed by a capture process after you have purged the LogMiner data dictionary. These redo log files may be removed without affecting any existing capture process at the local database. Oracle Database 11g: Implement Streams I - 185
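One way to combine these two views is to list registered archived log files that are no longer needed and can therefore be removed (a sketch; the join columns are assumptions based on the view descriptions above):

```sql
SELECT r.consumer_name, r.name
FROM   DBA_REGISTERED_ARCHIVED_LOG r
WHERE  r.name IN (SELECT file_name FROM DBA_LOGMNR_PURGED_LOG);
```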

186 Managing the Capture Process
When you click the Capture tab under Streams Management, the Capture administration page appears. On this page, you can: View all capture processes for a database View their associated rule sets and the rules within. Determine which queue each capture process is associated with. View the current state and status of each capture process. View the FIRST_SCN and APPLIED_SCN for each capture process. Determine the type of capture process (local or downstream). View any errors generated by a capture process. Manage the rules for a capture process by clicking the appropriate rule set name link Edit the capture process parameters or the FIRST_SCN setting View the statistics for a capture process Start, stop, or delete a capture process Oracle Database 11g: Implement Streams I - 186

187 Oracle Database 11g: Implement Streams I - 187
Summary In this lesson, you should have learned how to: Configure a capture process Manage a capture process Monitor a capture process Oracle Database 11g: Implement Streams I - 187

188 Practice 6 Overview: Configuring the Capture Process
This practice covers the following topics: Creating a capture process by using the ADD_TABLE_RULES procedure Viewing capture process information in the data dictionary Querying the rules and rule conditions that are used by the capture process Adding new rules to the capture process rule set with ADD_TABLE_RULES Oracle Database 11g: Implement Streams I - 188

189 Result of Practice 6: Configuring the Capture Process
AMER source database Redo logs Q_CAPTURE OE.ORDERS OE.ORDER_ITEMS STRM01_CAPTURE Oracle Database 11g: Implement Streams I - 189

190 Instantiation

191 Oracle Database 11g: Implement Streams I - 191
Objectives After completing this lesson, you should be able to: Describe the process of instantiation Prepare tables for instantiation Perform instantiation automatically using: Data Pump Export and Import Set the instantiation system change number (SCN) manually Oracle Database 11g: Implement Streams I - 191

192 Oracle Database 11g: Implement Streams I - 192
What Is Instantiation? Instantiation of database objects for Oracle Streams requires multiple steps: Creating a shared database object at all destination sites from the database object at the source site Populating the Streams data dictionary with metadata Setting the instantiation SCN for shared objects at each destination site Source site ABC table SCN: SCN: ABC table Destination site What Is Instantiation? In a Streams environment that shares a database object within a single database or between multiple databases, a source database is where changes to the object are generated in the redo log. To instantiate an object means to create the object physically at a destination database based on an object at a source database. If the object is a table, the objects at the source database and the destination database need not match. However, if some or all table data is replicated between the two databases, the data that is replicated should be consistent when the table is instantiated. If the tables exist at the destination database, it does not mean that they are instantiated. These objects must still have metadata information added to the Streams data dictionary at each site and have their instantiation SCN specified. To instantiate shared tables, you can perform a metadata import to set the instantiation SCN. The instantiation SCN instructs the apply process to apply events that were committed after a specific SCN for each source database table. Instantiation is required whenever an apply process dequeues captured logical change records (LCRs), even if the apply process sends the LCRs to an apply handler that does not execute them. Oracle Database 11g: Implement Streams I - 192

193 Performing Instantiation
Prepare the objects for instantiation at the source site. Create a copy of the objects at the destination site (if necessary). Set the instantiation SCN for the objects at the destination site. Performing Instantiation When you prepare an object for instantiation, the Streams data dictionaries for the relevant capture processes, propagations, and apply processes (both the local dictionary and all remote dictionaries) are populated asynchronously with information pertaining to the objects that were prepared for instantiation. If you use one of the following methods to instantiate the replicated objects, you can create the copies and set the instantiation SCNs in a single step: Data Pump export and import Transportable Tablespaces You can also use Recovery Manager (RMAN) to instantiate an entire database, or you can manually configure the instantiation SCNs using the DBMS_APPLY_ADM.SET_*_INSTANTIATION_SCN procedures. Note: In Oracle Database 11g and later, it is recommended to use Oracle Data Pump instead of the original Export and Import utilities. Oracle Database 11g: Implement Streams I - 193

194 Preparing for Instantiation
Is automatically performed when the DBMS_STREAMS_ADM package is used for configuring capture rules Can be performed manually by calling DBMS_CAPTURE_ADM procedures: PREPARE_TABLE_INSTANTIATION PREPARE_SCHEMA_INSTANTIATION PREPARE_GLOBAL_INSTANTIATION You must always prepare shared objects for instantiation. Preparing for Instantiation You must always prepare an object for instantiation at the source database before it is instantiated at the destination database. When you prepare an object for instantiation, the procedure records the lowest SCN of each object for instantiation. This is the earliest SCN for which changes could be applied to the object at the destination database. You can use the subsequent SCNs for instantiating the object at the destination site. When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the corresponding PREPARE_*_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package for the specified table, specified schema, or database. Supplemental logging for key columns is configured as a result of running the prepare_*_instantiation procedures. The procedure that prepares for instantiation adds information to the redo log at the source database. The local Streams data dictionary is populated with the information about the object when a capture process captures these redo entries, and any remote Streams data dictionaries are populated when the information is propagated to them. Oracle Database 11g: Implement Streams I - 194
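A manual preparation call for a single table might look like the following (a sketch; the table name is illustrative, and the supplemental_logging default of 'keys' is written out here for clarity):

```sql
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'OE.ORDERS',
    -- 'keys' enables supplemental logging of key columns,
    -- which is the default behavior
    supplemental_logging => 'keys');
END;
/
```

PREPARE_SCHEMA_INSTANTIATION and PREPARE_GLOBAL_INSTANTIATION take analogous arguments at the schema and database level.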

195 Preparing for Instantiation
You must also prepare objects for instantiation if: One or more rules pertaining to the object are added to the positive rule set for capture, propagation, or apply One or more rules pertaining to the object are removed from the negative rule set for capture, propagation, or apply One or more rule conditions pertaining to the object are modified in the positive or negative rule set for capture, propagation, or apply Preparing for Instantiation (continued) Whenever you add (or modify the condition of) a capture, propagation, or apply process rule for an object that is in a positive rule set, you must run the appropriate procedure to prepare the object for instantiation at the source database if the changes affect which objects are captured, propagated, or applied. Similarly, whenever you remove (or modify the condition of) a capture, propagation, or apply process rule for an object that is in a negative rule set, you must run the appropriate procedure to prepare the object for instantiation at the source database if the changes affect which objects are captured, propagated, or applied. You prepare database objects for instantiation at a source database to populate a relevant Streams data dictionary (either locally or on a destination database) that requires information about the source object, even if the object exists at a remote database where the rules were added. Oracle Database 11g: Implement Streams I - 195

196 Oracle Database 11g: Implement Streams I - 196
Instantiation SCN Must be set for each database object to which the captured messages are applied Instructs the apply process to apply messages that committed after a specific SCN for each source database table May be different for each source database if there are multiple source databases for a shared object Instantiation SCN In a Streams environment that shares information between multiple databases, a source database is where changes are generated in the redo log. If the destination database already contains some or all tables for which the messages will be captured, propagated, and applied, these tables do not need to be instantiated. There is no need to create them again at the destination database by exporting them from the source database and then importing them into the destination database. Instead, the apply process at the destination database must be instructed explicitly to apply the messages that committed after a specific SCN for each source database table. The instantiation SCN for the table specifies this SCN. The instantiation SCN for a database object determines which LCRs for a database object are ignored by an apply process. If the commit SCN of an LCR for a database object from a source database is less than or equal to the instantiation SCN for that database object at a destination database, the apply process at the destination database discards the LCR, because the instantiation SCN indicates that the destination table contains the change. If the commit SCN of an LCR is greater than the instantiation SCN, the apply process applies the LCR if it satisfies the rules in the apply process rule sets. If there are multiple source databases for a shared database object at a destination database, an instantiation SCN must be set for each source database, and a different instantiation SCN for each source database may be used for the same shared database object. Oracle Database 11g: Implement Streams I - 196

197 Setting the Instantiation SCN
There are multiple methods for instantiating objects and setting the instantiation SCN: Data Pump export and import RMAN Transportable Tablespaces Setting the instantiation SCN for an existing table, schema, or database manually by executing the procedures in the DBMS_APPLY_ADM package at the destination database Setting the Instantiation SCN You can use Oracle Data Pump, Transportable Tablespaces, and the original Export and Import utilities to instantiate individual database objects, schemas, or an entire database. You can use RMAN only to instantiate an entire database. Transportable Tablespaces is usually faster than Export and Import. It is recommended that you use MAINTAIN_SIMPLE_TTS or MAINTAIN_TTS to configure replication for the objects in a tablespace. The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures automatically create the Streams components and rules, and instantiate all objects in the tablespace. These procedures are covered in the lesson titled “Extending the Streams Environment.” If you use Data Pump or Transportable Tablespaces to perform the instantiation, the instantiation SCN is set automatically. If you use another method of instantiation, you must set the instantiation SCN manually by using the procedures in the DBMS_APPLY_ADM package. In earlier releases, the Export and Import utility can also be used to perform instantiation. Oracle Database 11g: Implement Streams I - 197

198 Setting the Instantiation SCN Manually
The DBA must ensure that the shared objects at the destination database are consistent with the shared objects at the source database at the time of the instantiation SCN. Use the DBMS_APPLY_ADM package at the destination site to set the instantiation SCNs: The recursive parameter sets the instantiation SCN for the specified level and all lower levels.
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
  source_schema_name   => 'HR',
  source_database_name => 'source_db',
  instantiation_scn    => ' ',
  recursive            => TRUE);
Setting the Instantiation SCN Manually If a database object exists at a destination database, you can manually set the instantiation SCN for the shared object at the destination site after preparing the source object for instantiation. Use the procedures in the DBMS_APPLY_ADM package: SET_TABLE_INSTANTIATION_SCN SET_SCHEMA_INSTANTIATION_SCN SET_GLOBAL_INSTANTIATION_SCN A recursive argument is available in the SET_SCHEMA_INSTANTIATION_SCN and SET_GLOBAL_INSTANTIATION_SCN procedures. When the recursive argument is set to TRUE and a database link exists from the destination database back to the source database: Executing the SET_SCHEMA_INSTANTIATION_SCN procedure sets the instantiation SCN for the schema and all tables owned by the schema at the source database Executing the SET_GLOBAL_INSTANTIATION_SCN procedure sets the instantiation SCN for the database, and for all schemas and tables owned by the schemas at the source database To use the database link, you must specify the source_database_name parameter, and the database link must have the same name as the global name of the source database. Oracle Database 11g: Implement Streams I - 198

199 Oracle Database 11g: Implement Streams I - 199
Setting the Instantiation SCN Manually (continued) If a database link does not exist from the destination site to the source site, or if the recursive argument is set to FALSE, you must call the SET_TABLE_INSTANTIATION_SCN procedure for each table and the SET_SCHEMA_INSTANTIATION_SCN procedure for each schema that is being instantiated at the destination database. The default value of the recursive parameter is FALSE. If the objects exist at the destination site, the recommended method for setting the instantiation SCNs is to use a metadata-only export and import. Note: The database link that is used when the recursive argument is set to TRUE is not related to the database link that is specified for apply_database_link, which is for heterogeneous systems. Oracle Database 11g: Implement Streams I - 199

200 Setting the Instantiation SCN Manually
DECLARE
  -- Declare variable to hold instantiation SCN
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@SITE2.NET(
    source_schema_name   => 'HR',
    source_database_name => 'SITE1.NET',
    instantiation_scn    => iscn);
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@SITE2.NET(
    source_object_name   => 'HR.EMPLOYEES',
    source_database_name => 'SITE1.NET',
    instantiation_scn    => iscn);
END;
/
Setting the Instantiation SCN Manually (continued) The example in the slide sets the instantiation SCN for the HR schema and the EMPLOYEES table at the SITE2.NET database to the current SCN on SITE1.NET. The PL/SQL procedure is executed at the source database, SITE1.NET. Because the recursive parameter is not specified (default value is FALSE), the SET_SCHEMA_INSTANTIATION_SCN procedure does not set the instantiation SCN for the tables within the HR schema. Thus, you must call the SET_TABLE_INSTANTIATION_SCN procedure for every table within the HR schema. This is not recommended because the tables cannot be updated while this is done. Oracle Database 11g: Implement Streams I - 200

201 Instantiation Using Data Pump
CREATE DIRECTORY dir1 AS '/home/oracle/dp_dir';
GRANT WRITE ON dir1 TO strmadmin;
CREATE DIRECTORY dir2 AS '/home/tom/';
GRANT READ ON dir2 TO strmadmin;

expdp DIRECTORY=DIR1 DUMPFILE=jobs.dmp
impdp DIRECTORY=DIR2 DUMPFILE=jobs.dmp

Instantiation Using Data Pump The Data Pump utility supports Oracle Streams, so you can use this utility when instantiating Streams objects, schemas, or databases. The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and the Data Pump Import utility, respectively. They provide a user interface that closely resembles the original Export (exp) and Import (imp) utilities. Because Data Pump is server-based, rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. These directory paths are specified as DIRECTORY objects, which map a name to a directory path on the file system. It is recommended that you use the Data Pump Export and Import utilities because they support all Oracle Database 11g features. Also, the design of Data Pump Export and Import results in greatly enhanced data movement performance over the original Export and Import utilities. To make full use of Data Pump technology, you must have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. In addition, the user performing the export and import must have READ or WRITE privileges on the DIRECTORY objects that are used by Data Pump. Oracle Database 11g: Implement Streams I - 201

202 Oracle Database 11g: Implement Streams I - 202
Instantiation Using Data Pump (continued) During export, Oracle Data Pump automatically uses the Oracle Flashback feature to ensure that the exported data and the exported procedural actions for each object are consistent to a single point in time. Using the Data Pump Export utility is sufficient to ensure the degree of consistency required for Streams instantiations. If you are using an export dump file for other purposes in addition to a Streams instantiation, and these other purposes have more stringent consistency requirements than those provided by Data Pump’s default export, you can use the Data Pump Export utility parameters, such as FLASHBACK_SCN or FLASHBACK_TIME for Streams instantiations. For example, if an export includes objects with foreign key constraints, more stringent consistency may be required. During Data Pump import, an instantiation SCN is set at the import database for each database object that was prepared for instantiation at the export database before the Data Pump export was performed. The instantiation SCN settings are based on the metadata obtained during Data Pump export. A Data Pump import session may set its Streams tag to the hexadecimal equivalent of 00 to avoid cycling the changes made by the import. Redo entries resulting from such an import have this tag value and, therefore, by default are not captured by any capture process running on the database where the Data Pump import was run. Whether the import session tag is set to the hexadecimal equivalent of 00 depends on the export that is being imported. Specifically, the import session tag is set to the hexadecimal equivalent of 00 in either of the following cases: The Data Pump export was in FULL or SCHEMA mode. The Data Pump export was in TABLE or TABLESPACE mode and at least one table included in the export was prepared for instantiation at the export database before the export was performed. 
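Putting these parameters together, a more stringent SCN-consistent export and a matching import might look like the following (a sketch in the slide's command-line style; the user, schema, file names, and SCN value are illustrative):

```shell
# Export the HR schema consistent as of a specific source SCN
expdp strmadmin SCHEMAS=hr DIRECTORY=dir1 DUMPFILE=hr.dmp FLASHBACK_SCN=46931285
# Import it at the destination; STREAMS_CONFIGURATION=n skips any
# general Streams metadata present in the dump file
impdp strmadmin DIRECTORY=dir2 DUMPFILE=hr.dmp STREAMS_CONFIGURATION=n
```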
If neither of these conditions is true for a Data Pump export that is being imported, the import session tag is NULL. The STREAMS_CONFIGURATION Data Pump Import utility parameter specifies whether to import a general Streams metadata that may be present in the export dump file (default is Y). This import parameter is relevant only if you are performing a full database import. You typically specify Y if an import is part of a backup or restore operation. Note: The STREAMS_CONFIGURATION Import utility parameter behaves the same for the original Import utility and the Data Pump Import utility. Oracle Database 11g: Implement Streams I - 202

203 Viewing Information About Instantiation
To determine which database objects have been prepared for instantiation, query the following data dictionary views: DBA_CAPTURE_PREPARED_TABLES DBA_CAPTURE_PREPARED_SCHEMAS DBA_CAPTURE_PREPARED_DATABASE
SELECT table_owner, table_name, scn, timestamp
FROM   DBA_CAPTURE_PREPARED_TABLES;
Viewing Information About Instantiation The DBA_CAPTURE_PREPARED_* views are also available as ALL_CAPTURE_PREPARED_* views: DBA_CAPTURE_PREPARED_TABLES contains the table owner, table name, SCN, and time stamp for all prepared tables. DBA_CAPTURE_PREPARED_SCHEMAS contains the schema name and the time stamp for all prepared schemas. DBA_CAPTURE_PREPARED_DATABASE contains the time stamp at which the database was ready to be instantiated. Oracle Database 11g: Implement Streams I - 203

204 Verifying Instantiations at an Apply Site
To determine the tables for which an instantiation SCN has been set, query the following views: DBA_APPLY_INSTANTIATED_OBJECTS DBA_APPLY_INSTANTIATED_SCHEMAS DBA_APPLY_INSTANTIATED_GLOBAL SELECT source_database, source_schema, instantiation_scn FROM DBA_APPLY_INSTANTIATED_SCHEMAS; SELECT source_database, instantiation_scn FROM DBA_APPLY_INSTANTIATED_GLOBAL; Verifying Instantiations at an Apply Site The DBA_APPLY_INSTANTIATED_OBJECTS view is not available as an ALL_ or USER_ view. This view contains the source database name, source object owner, name and type, instantiation SCN, ignore SCN, and database link to use on apply, if configured for a heterogeneous environment. The DBA_APPLY_INSTANTIATED_SCHEMAS view provides the instantiation-specific SCNs for the instantiated schemas. It contains information about the source database and schema, instantiation SCN, and the name of the database link to use on apply, if configured for a heterogeneous environment. The DBA_APPLY_INSTANTIATED_GLOBAL view provides the instantiation-specific SCNs for the instantiated database. It contains information about the source database, instantiation SCN, and the name of the database link to use on apply, if configured for a heterogeneous environment. Oracle Database 11g: Implement Streams I - 204

205 Removing Instantiation Information
Configured incorrectly Object no longer exists DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN( source_object_name => 'HR.EMPLOYEES', source_database_name => 'SITE1.NET', instantiation_scn => NULL); Removing Instantiation Information If you incorrectly configured the instantiation SCN for an object, or if an instantiation SCN exists for a nonexistent object, you can remove the instantiation SCN by calling the appropriate SET_*_INSTANTIATION_SCN procedure and setting the instantiation SCN to NULL. This removes the entry from the associated dictionary view. The example in the slide removes the instantiation SCN configuration for the EMPLOYEES table in the HR schema at the destination site. Oracle Database 11g: Implement Streams I - 205

206 Oracle Database 11g: Implement Streams I - 206
Summary In this lesson, you should have learned how to: Describe the process of instantiation Prepare tables for instantiation Perform instantiation at the destination database Set the instantiation SCN manually Oracle Database 11g: Implement Streams I - 206

207 Practice 7 Overview: Instantiating Tables
This practice covers the following topics: Using Data Pump to instantiate the OE.ORDERS table Manually setting the instantiation SCN for the OE.ORDER_ITEMS table Oracle Database 11g: Implement Streams I - 207

208 Result of Practices 6 and 7: Instantiating Tables
AMER database EURO database SCN: SCN: OE.ORDERS OE.ORDER_ITEMS Q_CAPTURE OE.ORDERS OE.ORDER_ITEMS STRM01_CAPTURE Redo logs Result of Practices 6 and 7: Instantiating Tables Because Practice 7 builds on Practice 6, you can view the combined result in the graphic in the slide. Oracle Database 11g: Implement Streams I - 208

209 Propagation Concepts and Configuration

210 Oracle Database 11g: Implement Streams I - 210
Objectives After completing this lesson, you should be able to: Describe how messages are propagated Schedule propagation between two queues Propagate captured messages Verify the propagation configuration Oracle Database 11g: Implement Streams I - 210

211 Oracle Database 11g: Implement Streams I - 211
What Is Propagation? Streams uses queues to stage messages for propagation or consumption. Messages propagate from a source queue to a destination queue. Staging areas can receive messages from a queue: In the same database In a remote database Propagation is performed by a propagation job. What Is Propagation? Propagation in Oracle Streams always takes place between two queues: a source queue and a destination queue. You may optionally propagate messages in a Streams queue to other Streams queues in the same database or in remote databases. By using proper transformations, you can also propagate messages in a Streams queue to an Oracle Streams Advanced Queuing (AQ) type queue. Streams uses propagation to send data from one queue to another. The data can be captured messages (LCRs), user-enqueued messages, or Streams metadata. Oracle Database 11g: Implement Streams I - 211

212 Oracle Database 11g: Implement Streams I - 212
Directed Networks You can route messages through a series of staging areas before they reach the destination queue. You do not have to apply or dequeue messages at the intermediate staging queues. The intermediate database may propagate messages by using queue forwarding or apply forwarding. Directed Networks To simplify network routing and reduce wide area network (WAN) traffic, you need not send messages to all databases and applications. Rather, you can direct them through staging areas on one or more systems until they reach the final destination. This is referred to as a directed network. You do not need to apply or dequeue messages at each intermediate system, which provides the flexibility in determining the messages that are applied at a particular system. By using Streams, you can choose which messages are propagated to each destination database, and you can specify the route that the messages traverse on their way to a destination database. The advantage of using a directed network is that a source database does not need to have a physical network connection with the destination database. So, if you want messages to propagate from one database to another, but there is no direct network connection between the computers that are running these databases, you can still propagate the messages without reconfiguring your network, as long as one or more intermediate databases connect the source database to the destination database. If you use directed networks and if an intermediate site is unavailable for an extended period of time or is removed, you may need to reconfigure the network environment and the Streams environment. Oracle Database 11g: Implement Streams I - 212

213 Oracle Database 11g: Implement Streams I - 213
Queue Forwarding The intermediate database forwards the message toward its destination. A message that is being forwarded by the intermediate database may or may not be applied at the intermediate database. The source database is the database where the message originates. Queue Forwarding Queue forwarding means that the message received from a queue is forwarded to another queue (usually in an intermediate database). These messages do not have to be applied at the local database. The source database for a message is the database where the message originated. Consider the following characteristics of queue forwarding when you plan your Streams environment: A message is propagated through the data stream without being changed. Thus, each destination site can receive the original transaction. However, you can also define transformations for propagation to modify the message before it reaches its destination. The destination database must have a separate apply process to apply messages from each source database. One or more intermediate databases exist between a source database and a destination database. Oracle Database 11g: Implement Streams I - 213

214 Oracle Database 11g: Implement Streams I - 214
Apply Forwarding Messages are applied and recaptured at an intermediate site before being sent to the destination site. The intermediate database then becomes the source database for the applied messages. Messages may be modified in transit as a result of conflict resolution, apply handlers, or transformations. Apply Forwarding Apply forwarding means that an apply process first processes the messages that are being forwarded at an intermediate database. The messages at the intermediate site are applied, recaptured by a capture process, and then propagated to the destination sites. Consider the following when designing your Streams system for apply forwarding: Messages are applied and recaptured at intermediate databases and may be changed by update conflict handlers, apply handlers, or apply transformations. When a logical change record (LCR) message is applied at a database, redo is generated that may or may not have a tag associated with it. Make sure that the apply process sets an appropriate apply tag and the capture process rules are configured correctly so that the changes can be recaptured. Because messages are recaptured at intermediate databases, the intermediate database becomes the new source database for the message. This may result in fewer apply processes being required at the destination site (which requires a unique apply process for each source queue). You can also use a combination of queue forwarding and apply forwarding in your Streams environment. Oracle Database 11g: Implement Streams I - 214

215 How Does Propagation Work?
A single source queue may propagate to multiple destination queues. A single destination queue may receive messages from multiple source queues. By default, only one propagation job is used to send messages from a source queue to all destination queues at a particular database. How Does Propagation Work? A propagation can be defined in two ways: Queue-to-database link: The propagation is defined by a source queue and database link pair. This is the default. The queue_to_queue parameter is set to FALSE in this case. Queue-to-queue: The propagation is defined by a source queue and destination queue pair. The queue_to_queue parameter is set to TRUE. Although propagation is configured between only two queues, a single queue may participate in many propagations. Consider the following: A single source queue may propagate messages to multiple destination queues. A single destination queue may receive messages from multiple source queues. A single queue may be a destination queue for some propagations and a source queue for other propagations. In Oracle Database 11g, scheduler jobs are used for Streams propagation. A propagation job may be used by multiple propagations. When you have configured queue–to–database link propagation, all destination queues at a database that receive messages from a single source queue do so through a single propagation job. By using a single propagation job for multiple destination queues, communication resources are conserved because messages are not sent more than once to the same database. Because there is only one propagation job, each propagation uses the same schedule. Oracle Database 11g: Implement Streams I - 215

216 Queue-to-Queue Propagation
Uses an exclusive propagation job to propagate messages from the source queue to the destination queue Can use the same database link as other queue-to-queue propagations Enables you to enable, disable, or configure the propagation schedule for each queue-to-queue propagation separately, without impacting other propagations Queue-to-Queue Propagation Unlike queue–to–database link propagation, queue-to-queue propagation has its own exclusive job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately. Even when multiple queue-to-queue propagations use the same database link, you can enable, disable, or set the propagation schedule for each queue-to-queue propagation separately. You can use the ADD_*_PROPAGATION_RULES procedures of the DBMS_STREAMS_ADM package to configure queue-to-queue propagation. Each procedure has an argument called queue_to_queue, which when set to TRUE configures the propagation as a queue-to-queue propagation. The default value for this argument is NULL, which means that the propagation is configured as a queue–to–database link propagation. Interoperability Notes Existing propagation schedules continue to work as queue–to–database link propagations following an upgrade to Oracle Database 10g, Release 2. Queue-to-queue propagation can be created only between databases with compatibility set to 10.2.0 or higher. Oracle Database 11g: Implement Streams I - 216
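Because each queue-to-queue propagation has its own job, its schedule can be disabled and re-enabled without touching other propagations that share the same database link. A sketch using the DBMS_AQADM schedule procedures; the queue and destination names are illustrative:

```sql
BEGIN
  -- Disable only the schedule for this source queue / destination queue
  -- pair; other propagations over the same database link keep running.
  DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE(
    queue_name        => 'strmadmin.hr_queue',
    destination       => 'site2.net',
    destination_queue => 'strmadmin.streams_queue');

  -- Re-enable the same schedule later.
  DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE(
    queue_name        => 'strmadmin.hr_queue',
    destination       => 'site2.net',
    destination_queue => 'strmadmin.streams_queue');
END;
/
```

Omitting the destination_queue argument would instead address a queue–to–database link schedule, affecting every propagation that shares that job.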

217 Oracle Database 11g: Implement Streams I - 217
Queue-to-Queue Propagation (continued) If you use an earlier Oracle Database version as a source database, propagation continues to function using queue–to–database link propagation. If you use an Oracle Database 10g, Release 2, database as the source database and propagate changes to an earlier version of Oracle Database, only queue–to–database link propagation is supported. To implement such a propagation, you must specify queue_to_queue => FALSE when creating the propagation rules. If the destination database is up when you create the propagation schedule, the target database compatibility is determined automatically and an error is raised appropriately during create propagation, if necessary. If the destination database is down when you create the propagation schedule, the compatibility check is deferred until propagation run time. If, at that time, it is discovered that queue-to-queue propagation has been specified for an earlier Oracle Database version, an error is raised indicating that the configuration is unsupported. In this case, any existing messages in the queue must be manually migrated to the destination queue. Oracle Database 11g: Implement Streams I - 217

218 Guaranteed Message Delivery
The destination queue notifies the source queue that a message has been received. The message remains in the source queue until it has been propagated to all destination sites. A captured message is propagated successfully to a destination when the message has been processed by all relevant apply processes at the destination and has been propagated successfully from the source queue to all its relevant destination queues. Guaranteed Message Delivery A user-enqueued message is propagated successfully to a destination queue when the enqueue into the destination queue is committed. A captured message is propagated successfully to a destination queue when the following actions are completed: The message is processed by all relevant apply processes associated with the destination queue. The message is propagated successfully from the source queue to all its relevant destination queues. When a captured message is successfully propagated between two Streams queues, the destination queue acknowledges successful propagation of the message. If the source queue is configured to propagate a message to multiple destination queues, the message remains in the source queue until each destination queue has sent confirmation of message propagation to the source queue. When each destination queue acknowledges successful propagation of the message and all local consumers in the source queue database have consumed the message, the message is removed from the source queue by the AQ time-monitoring process. This confirmation system ensures the propagation of messages from the source queue to the destination queue. Oracle Database 11g: Implement Streams I - 218
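Messages that are still awaiting acknowledgment remain visible in the source buffered queue. A sketch for observing this, assuming the V$BUFFERED_QUEUES dynamic view available in Oracle Database 10.2 and later:

```sql
-- NUM_MSGS shows messages currently in each buffered queue; SPILL_MSGS
-- shows how many have spilled to disk. Messages leave the source queue
-- only after all destination queues and local consumers have
-- acknowledged them.
SELECT queue_schema, queue_name, num_msgs, spill_msgs
FROM   v$buffered_queues;
```

A steadily growing NUM_MSGS on the source queue often indicates that a destination has stopped acknowledging, for example because its apply process is down.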

219 Creating a Propagation
Automatically by using procedures in the DBMS_STREAMS_ADM package Manually by using the CREATE_PROPAGATION procedure of the DBMS_PROPAGATION_ADM package On creation: Status ENABLED Starting a propagation: Creation of propagation job Creating a Propagation Each procedure in the DBMS_STREAMS_ADM package that creates propagation rules does the following: Creates a propagation with an ENABLED status Depending on the inclusion_rule parameter, creates either a positive or negative rule set for the propagation, if the propagation does not have a rule set of the specified type associated with it Adds table, schema, global, subset, or user-enqueued message rules to the rule set The CREATE_PROPAGATION procedure of the DBMS_PROPAGATION_ADM package creates a propagation but does not create a rule set or rules for the propagation. You can assign rule sets to the propagation that is being created by specifying values for the rule_set_name and negative_rule_set_name parameters. Before creating a propagation, you must perform the following tasks: Create the source queue: Create a source queue for the propagation if it does not exist. Configure connectivity between the databases: Create a database link between the database that contains the source queue and the database that contains the destination queue, if the destination queue is in a different database. On creation, a propagation is enabled but not started. When a propagation is started, the propagation job is created. Propagation Oracle Database 11g: Implement Streams I - 219
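The two prerequisites in the notes, creating the source queue and configuring connectivity, can be scripted before creating the propagation. A sketch with illustrative names and credentials:

```sql
BEGIN
  -- Create the source queue (and its queue table) if it does not exist.
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.hr_queue_table',
    queue_name  => 'strmadmin.hr_queue');
END;
/

-- Connectivity to the destination database. The link name must match the
-- destination database's global name; the password and connect string
-- here are placeholders.
CREATE DATABASE LINK site3.net
  CONNECT TO strmadmin IDENTIFIED BY password
  USING 'site3.net';
```

With the queue and link in place, any of the ADD_*_PROPAGATION_RULES procedures or CREATE_PROPAGATION can then be run.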

220 Creating a Propagation: Example
BEGIN DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES( schema_name => 'HR', streams_name => 'prop_to_site3', source_queue_name=>'strmadmin.hr_queue', destination_queue_name => 'ix.streams_queue@site3.net', include_dml => true, include_ddl => true, include_tagged_lcr => false, source_database => 'site1.net', inclusion_rule => TRUE, queue_to_queue => FALSE); END; / Creating a Propagation: Example When it is run, the procedure shown in the slide performs the following actions: Creates a propagation named prop_to_site3, which is owned by the user who is executing this procedure Specifies that the propagation propagates LCRs from hr_queue in the current database to streams_queue in the IX schema in the SITE3.NET database Note: The source and destination queue names do not have to be the same. Specifies the destination site for the propagation. The destination_queue_name parameter contains the destination queue name qualified by a database link. SITE3.NET is the global name of the destination database; a database link with the same name must exist at the propagating site. Creates a positive rule set and associates it with the propagation if the propagation does not have a positive rule set. The rule set name is specified by the system. Creates two rules. One rule specifies that the row LCRs that contain the results of DML changes to tables in the HR schema are propagated to the destination queue. The other rule specifies that the DDL LCRs that contain the changes to tables in the HR schema are propagated to the destination queue. The rule names are specified by the system. Adds the two rules to the positive rule set that is associated with the propagation Oracle Database 11g: Implement Streams I - 220

221 Oracle Database 11g: Implement Streams I - 221
Creating a Propagation: Example (continued) Specifies the default value for the INCLUDE_TAGGED_LCR parameter. The parameter INCLUDE_TAGGED_LCR => FALSE is a flag indicating that the IS_NULL_TAG = 'Y' clause must be included in the generated rule condition. Effectively, this causes the propagation to handle only those LCRs that have a NULL tag. If INCLUDE_TAGGED_LCR is set to TRUE, the IS_NULL_TAG = 'Y' clause is not included in the rule condition. In this case, all LCRs (tagged or not) that satisfy the basic rule condition are propagated by the propagation job. Specifies that the rule condition must include a clause that checks the source database. In this case, the SOURCE_DATABASE_NAME = 'SITE1.NET' clause is appended to the rule condition. This limits the propagation to propagating only those changes that originated at the SITE1 database. Configures the propagation as a queue–to–database link propagation. A propagation job is created only if it does not exist for propagating messages to the SITE3 database. Oracle Database 11g: Implement Streams I - 221

222 Queue-to-Queue Propagation and Real Application Clusters
(Diagram: queue–to–database link versus queue-to-queue propagation failing over across RAC instances; labels: CAP1, AP3, Instance1 to Instance3 on Node1 to Node3, RAC_DB service) Queue-to-Queue Propagation and Real Application Clusters In a Real Application Clusters (RAC) environment, a Streams queue has its buffered queue on an instance. When that instance fails, the queue and buffered queue are migrated to a surviving instance. If you use queue–to–database link propagation, a propagation that sends messages to a queue in the RAC database uses a database link that points to one instance of the cluster. If that instance fails, the queue is migrated to a different instance. You must re-create the database link for the propagation at the source site to redirect the messages to the new owning instance of the queue. If you use queue-to-queue propagation, the database link on the source site uses a service for the RAC database instead of the service name for a single instance. This enables transparent failover of propagation to another instance if the primary RAC instance fails. Oracle Database 11g: Implement Streams I - 222

223 Oracle Database 11g: Implement Streams I - 223
Queue-to-Queue Propagation and Real Application Clusters (continued) To create a database link from the source database to the RAC database that uses the RAC database service, use a command similar to the following: CREATE DATABASE LINK <RACDB_GLOBAL_NAME> … USING 'CONNECT_DATA=(service_name=<RACDB_GLOBAL_NAME>)'; If you implement queue-to-queue propagation with a RAC database as the destination site, the propagation sends messages to the destination queue by using a queue service. A queue service is created automatically when you create subscribers for a buffered queue on a RAC database. This queue service is started on the instance that owns the buffered queue. If the instance that owns the buffered queue fails, the buffered queue is migrated to a new instance and the queue service is restarted on that instance. At any time, there can be only one queue service for the RAC database. Queue-to-queue propagation connects to the destination queue service when one exists. Currently, a queue service is created when the database is a RAC database and the queue is a buffered queue. Because the queue service always runs on the owner instance of the queue, transparent failover can occur when RAC instances fail. When multiple queue-to-queue propagations use a single database link, the connect description for each queue-to-queue propagation changes automatically to propagate messages to the correct destination queue. In contrast, queue–to–database link propagations require you to repoint your database links if the owner instance in a RAC database that contains the destination queue for the propagation fails. Oracle Database 11g: Implement Streams I - 223

224 Manually Creating a Propagation
BEGIN DBMS_PROPAGATION_ADM.CREATE_PROPAGATION ( propagation_name => 'prop_to_site2', source_queue => 'strmadmin.streams_queue', destination_queue=>'strmadmin.streams_queue', destination_dblink => 'site2.net', rule_set_name=>'strmadmin.stream1_rs', queue_to_queue => TRUE); END; / Manually Creating a Propagation The example in the slide shows how to create a propagation by using the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. This procedure performs the following actions: Creates a propagation named prop_to_site2, which is owned by the user who is executing this procedure. A propagation with the same name must not exist. Specifies that the propagation propagates messages from streams_queue on the current site to streams_queue in the SITE2.NET database, using the site2.net database link. The propagated messages may be captured LCRs, user-enqueued messages, or both kinds of messages. Note: The source and destination queue names do not have to be the same. Assigns the existing rule set named stream1_rs to the propagation as a positive rule set Configures the propagation as a queue-to-queue propagation Creates a propagation job owned by the user who is executing this procedure to propagate the messages Oracle Database 11g: Implement Streams I - 224
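A newly created propagation is enabled, but its scheduler propagation job is created only when the propagation is started. The sketch below starts the propagation created above explicitly:

```sql
BEGIN
  -- Starting the propagation is the point at which the scheduler
  -- propagation job is created; it then becomes visible in
  -- DBA_SCHEDULER_JOBS.
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'prop_to_site2');
END;
/
```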

225 Oracle Database 11g: Implement Streams I - 225
Propagation Jobs Propagation uses the Scheduler interface. A propagation job is used to implement propagation. A propagation schedule specifies how often a propagation job propagates messages from a source queue to a destination queue. Scheduled jobs are owned by the SYS user. Scheduler procedures can be used to manage propagation scheduling. Propagation Jobs Starting in Oracle Database 11g, AQ propagation uses Oracle Scheduler, enabling AQ propagation to take advantage of Scheduler features and provide better scalability and response time. Job queue (Jnnn) background processes run the job. A propagation schedule may use a dedicated process, which can run continuously, or it may be message driven. Job queue process parameters need not be set in Oracle Database 11g for propagation to work. Oracle Scheduler automatically starts up the required number of slaves for the existing propagation schedules. Propagation can be scheduled in the following modes: Dedicated schedule: Propagation runs forever or for a specified duration. This mode provides the lowest propagation latencies. Periodic schedule: Propagation runs periodically for a specified interval. This mode can be used when propagation runs in a batched mode. Message-based: Propagation is started when there are messages to be propagated. This mode makes more efficient use of the available resources while still providing a fast response time. Oracle Database 11g: Implement Streams I - 225

226 Oracle Database 11g: Implement Streams I - 226
Propagation Job To propagate to remote databases, you must: Configure network communication Enable job queue processes Create a database link Grant the database link access to the propagation job owner Propagation created by: DBMS_STREAMS_ADM.ADD_*_PROPAGATION_RULES DBMS_PROPAGATION_ADM.CREATE_PROPAGATION Starting propagation job: DBMS_PROPAGATION_ADM.START_PROPAGATION Visible in the DBA_SCHEDULER_JOBS view Propagation Job The administrator may choose a schedule that best meets the application performance requirements of his or her database. Oracle Scheduler starts the required number of job queue processes for the propagation schedules. Because the scheduler optimizes for throughput, if the system is heavily loaded, it may not run some propagation jobs. The resource manager may be used to have better control over the scheduling decisions. In particular, associating propagation jobs with different resource groups enables fair scheduling, which may be important in heavy-load situations. Like other Oracle Scheduler jobs, propagation jobs have an owner and use slave processes (jnnn) as needed to execute jobs. The following procedures create a propagation: In the DBMS_STREAMS_ADM package ADD_GLOBAL_PROPAGATION_RULES ADD_SCHEMA_PROPAGATION_RULES ADD_TABLE_PROPAGATION_RULES ADD_SUBSET_PROPAGATION_RULES In the DBMS_PROPAGATION_ADM package CREATE_PROPAGATION Oracle Database 11g: Implement Streams I - 226

227 Oracle Database 11g: Implement Streams I - 227
Propagation Job (continued) When a propagation is created, it has the ENABLED status. When a propagation is started, the propagation job is created and visible in the DBA_SCHEDULER_JOBS view. You see the Jnnn jobs only if a propagation job is actively running. When one of these procedures creates a propagation, a new propagation job is created in the following cases: If the queue_to_queue parameter is set to TRUE, a new propagation job is always created for the propagation. Each queue-to-queue propagation has its own propagation job. However, a slave process can be used by multiple propagation jobs. If the queue_to_queue parameter is set to FALSE, a propagation job is created when no propagation job exists for the source queue and the database link is specified. If a propagation job exists for the specified source queue and database link, the new propagation uses the existing propagation job and shares this propagation job with all the other queue–to–database link propagations that use the same database link. A propagation job for a queue–to–database link propagation can be used by more than one propagation. All destination queues at a database receive messages from a single source queue through a single propagation job. By using a single propagation job for multiple destination queues, Oracle Streams ensures that a message is sent to a destination database only once, even if the same message is received by multiple destination queues in the same database. Communication resources are conserved because messages are not sent more than once to the same database. Oracle Database 11g: Implement Streams I - 227

228 Propagation Scheduling
A propagation schedule: Specifies how often a job propagates messages from the source queue to the destination queue Uses default values when created through the DBMS_STREAMS_ADM or DBMS_PROPAGATION_ADM package Can be modified with the ALTER_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package Propagation Scheduling A propagation schedule specifies how often a propagation job propagates messages from a source queue to a destination queue. Each queue-to-queue propagation has its own propagation job and propagation schedule, but queue–to–database link propagations that use the same propagation job have the same propagation schedule. If you create a propagation schedule by adding propagation rules with the DBMS_STREAMS_ADM or the DBMS_PROPAGATION_ADM package, the propagation schedule is created using the default schedule values. You can alter the schedule for a propagation job by using the ALTER_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package. Changes made to a propagation job affect all propagations that use the propagation job. Oracle Database 11g: Implement Streams I - 228

229 Propagation Scheduling
Default propagation schedule properties: start_time is SYSDATE(). duration is NULL. next_time is NULL. latency is five seconds. Altering the propagation schedule: BEGIN DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE( queue_name => 'strmadmin.hr_queue', destination => 'SITE2.NET', duration => 300, next_time => 'SYSDATE + 900/86400', latency => 30); END; Propagation Scheduling A default propagation schedule is established when a new propagation job is created. It has the following properties: start_time: The initial start time for the propagation window for messages from the source queue to the destination duration: The length of the propagation window in seconds. A NULL value means that the propagation window lasts forever or until the propagation is unscheduled. next_time: A date function that is used to compute the start of the next propagation window from the end of the current window. A setting of NULL means that propagation restarts as soon as it finishes the current duration. latency: The wait time during which the schedule coordinator checks for more messages that need propagation The example in the slide modifies the propagation schedule so that a propagation window opens every 900 seconds (15 minutes), stays open for 300 seconds (5 minutes), and checks for messages to propagate every 30 seconds while the propagation window is open. Oracle Database 11g: Implement Streams I - 229
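The effect of ALTER_PROPAGATION_SCHEDULE can be verified in the AQ schedule view. A sketch using DBA_QUEUE_SCHEDULES, with column names per the 11g data dictionary:

```sql
-- Window length, next start expression, latency, and whether the
-- schedule is currently disabled, for the HR_QUEUE source queue.
SELECT qname, destination, propagation_window, next_time,
       latency, schedule_disabled
FROM   dba_queue_schedules
WHERE  qname = 'HR_QUEUE';
```

After the ALTER_PROPAGATION_SCHEDULE call in the slide, PROPAGATION_WINDOW should show 300 and LATENCY should show 30 for the SITE2.NET destination.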

230 Combined Capture and Apply
Is an automatic optimization that uses: A single publisher and consumer Direct transmission from capture to apply via database link (without queue staging and propagation) Combined capture and apply (CCA) information is available. GV$STREAMS_CAPTURE GV$STREAMS_APPLY_READER (Diagram: Capture sending LCRs directly to Apply over a database link) Combined Capture and Apply An optimization first available in Oracle Database 11g, Release 1, is a capture process that automatically sends LCRs directly to an apply process. This occurs when there is a single publisher and consumer defined for the queue that contains the captured changes. This optimized configuration is called combined capture and apply. When combined capture and apply (CCA) is in use, LCRs are transmitted directly from the capture process to the apply process via a database link. In this mode, the capture does not stage the LCRs in a queue or use queue propagation to deliver them. The GV$STREAMS_CAPTURE and GV$STREAMS_APPLY_READER views reflect the connect details and statistics when CCA is in use. CCA can be used when the capture process and the apply process reside on the same database or on different databases. CCA is automatically selected when certain requirements (on the following page) are met. Oracle Database 11g: Implement Streams I - 230

231 Combined Capture and Apply Requirements
Capture and apply in the same 11g database:
The capture process and apply process use the same queue.
The queue has a single publisher: the capture process.
There is no propagation.
The queue has a single consumer: the apply process.
Capture and apply in different 11g databases:
A propagation is configured between the capture process queue and the apply process queue.
The capture process queue has a single publisher: the capture process.
The capture process queue has a single consumer: the propagation.
The apply process queue has a single publisher: the propagation.
The apply process queue has a single consumer: the apply process.
Combined Capture and Apply Requirements When the capture process and the apply process reside on the same database, CCA is possible only if all the following conditions are met: The database must be an Oracle Database 11g (11.1 or later) database. The capture process and apply process must use the same queue. The queue must have a single publisher, and it must be the capture process. The queue must have a single consumer for the buffered queue: the apply process. The queue can have one or more other apply processes as consumers for the persistent queue. When the capture process and apply process reside on different databases, CCA is possible only if all the following conditions are met: The database running the capture process and the database running the apply process must each be an Oracle Database 11g (11.1 or later) database. The capture process queue must have a single publisher: the capture process. A propagation must be configured between the capture process queue and the apply process queue. There can be no intermediate queues (no directed network). The capture process queue must have a single consumer, and it must be the propagation between the capture process queue and the apply process queue. The apply process queue must have a single publisher, and it must be the propagation between the capture process queue and the apply process queue. 
Oracle Database 11g: Implement Streams I - 231

232 How to Use Combined Capture and Apply
Automatically selected during startup of the capture process Detection of configuration changes if CCA conditions are not met: Automatic restart without CCA optimization Detection of configuration changes if CCA conditions are met: Switch to CCA only at the next restart How to Use Combined Capture and Apply If the requirements in the slide for CCA are met, the CCA optimization is automatically selected. During startup, the capture process detects that it can combine with the apply process, and implements the optimization. After it establishes a connection with the apply process, it sends the captured LCRs directly to the apply process. When the capture process and apply process are on the same database, the queue is not used to store LCRs. If CCA is used and you change the configuration so that it no longer meets the requirements of combined capture and apply, the capture process detects this change and restarts. After the capture process restarts, it uses propagations and queues to send messages instead of CCA. If an existing Streams configuration is changed to meet the requirements of CCA, the optimization is implemented automatically on the next restart of the capture process. In this case, you must restart the capture process manually; it is not restarted automatically. Oracle Database 11g: Implement Streams I - 232

233 Verifying the Use of CCA
GV$STREAMS_CAPTURE:
APPLY_NAME column
STATE column: WAITING FOR APPLY TO START when the apply process is disabled; CONNECTING TO APPLY DATABASE when the apply process database is shut down; WAITING FOR PROPAGATION TO START when the propagation is disabled or unscheduled
APPLY_DBLINK column
GV$STREAMS_APPLY_READER:
PROXY_SID column
PROXY_SERIAL column
Normal management of capture and apply processes
Verifying the Use of CCA When CCA is used, the following information is included in the data dictionary. For the capture process, the APPLY_NAME column in the GV$STREAMS_CAPTURE view shows the name of the apply process to which the capture process sends LCRs. The APPLY_DBLINK column is populated with the database link to the remote database only if the apply process is at a remote database. When CCA is not used, these columns are not populated. For the capture process, the STATE column in the GV$STREAMS_CAPTURE view can show one of the following values: WAITING FOR APPLY TO START when the apply process is disabled CONNECTING TO APPLY DATABASE when the apply process database is shut down WAITING FOR PROPAGATION TO START when the propagation is disabled or unscheduled After the capture process establishes a connection with the apply process, it shows the normal capture process states, such as CAPTURING CHANGES and CREATING LCR. Oracle Database 11g: Implement Streams I - 233

234 Oracle Database 11g: Implement Streams I - 234
Verifying the Use of CCA (continued) For the apply process, the PROXY_SID and PROXY_SERIAL columns in the GV$STREAMS_APPLY_READER view are populated. These are the session ID and serial number, respectively, of the apply process network receiver that is responsible for direct communication between capture and apply. When combined capture and apply is not used, these columns are not populated. There are now some additional statistics in GV$STREAMS_APPLY_READER and GV$STREAMS_CAPTURE: GV$STREAMS_CAPTURE: APPLY_MESSAGES_SENT: Number of messages sent to apply APPLY_BYTES_SENT: Number of bytes sent to apply GV$STREAMS_APPLY_READER: PROXY_SPID: The operating system PID of the apply network receiver CAPTURE_BYTES_RECEIVED: Number of bytes received from capture Managing CCA When CCA is used, you can manage capture and apply processes normally. Specifically, you control the capture process and apply process behavior in the following ways: Changes must satisfy the capture and propagation rule sets to be captured by the capture process. LCRs must satisfy the apply process rule sets to be applied by the apply process. Rule-based transformations that are configured for the rules in the positive rule set of a capture process, propagation, or apply process are run when a rule evaluates to TRUE. LCRs are sent to apply handlers for an apply process when appropriate. Update conflict resolution handlers are invoked when appropriate during apply. Oracle Database 11g: Implement Streams I - 234
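The checks described in the notes can be expressed as two queries against the views they name. A sketch; the statistic columns are those listed above:

```sql
-- Capture side: APPLY_NAME and APPLY_DBLINK are populated only when
-- CCA is active, and APPLY_MESSAGES_SENT counts messages sent directly
-- to the apply process.
SELECT capture_name, state, apply_name, apply_dblink,
       apply_messages_sent
FROM   gv$streams_capture;

-- Apply side: PROXY_SID and PROXY_SERIAL identify the apply network
-- receiver session used for the direct capture-to-apply connection.
SELECT apply_name, proxy_sid, proxy_serial, capture_bytes_received
FROM   gv$streams_apply_reader;
```

NULLs in APPLY_NAME, PROXY_SID, and PROXY_SERIAL indicate that CCA is not in use for that capture/apply pair.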

235 Oracle Database 11g: Implement Streams I - 235
Propagation Rules Determining: Which messages are propagated Which messages are excluded from propagation One propagation job for both LCR and non-LCR messages Creating propagation rules: DBMS_STREAMS_ADM DBMS_RULE_ADM (Diagram: rule granularity levels: global, schema, object) Propagation Rules A propagation copies messages from one queue to another based on the rules that you define. A propagation may propagate all messages in a source queue to the destination queue or only a subset of the messages. Both LCR and non-LCR messages can be sent with the same propagation. You can use the following procedures in the DBMS_STREAMS_ADM package to easily create rules for propagating captured messages: ADD_GLOBAL_PROPAGATION_RULES ADD_SCHEMA_PROPAGATION_RULES ADD_TABLE_PROPAGATION_RULES ADD_SUBSET_PROPAGATION_RULES If you configure your Streams environment by using one of the MAINTAIN_* procedures in the DBMS_STREAMS_ADM package, the propagation between the source database and the destination database is automatically configured for you. Oracle Database 11g: Implement Streams I - 235

236 Oracle Database 11g: Implement Streams I - 236
Propagation Rules A propagation rule can be positive or negative. The propagation rule set is independent of the capture rule set. Captured objects are not automatically propagated. (Diagram: capture and propagation rules covering OE.ORDER_ITEMS, HR.EMPLOYEES, HR.JOBS, OE.ORDERS, HR.COUNTRIES, and IX.USER_MSG) Propagation Rules (continued) For captured LCRs, rules determine the database objects for which changes are propagated as well as the types of changes to propagate. If a capture process does not capture a particular change, the change is not placed in the staging area. Because a propagation propagates only those messages in the staging area, you must have both a capture rule and a propagation rule to propagate a captured LCR. In the diagram in the slide, positive table-level propagation rules are defined for the JOBS, ORDERS, and ORDER_ITEMS tables. Because a capture rule was not created for the ORDER_ITEMS table, changes to the ORDER_ITEMS table that originate at this site are not placed in the staging area and are not propagated. The COUNTRIES table has a capture rule but no propagation rule, so its changes are captured but not propagated. LCR messages for the ORDER_ITEMS table that are enqueued into the staging area by a different propagation or by an application are propagated. Oracle Database 11g: Implement Streams I - 236

237 Managing Propagation Rules
To specify or remove a rule set for a propagation, use the ALTER_PROPAGATION procedure of the DBMS_PROPAGATION_ADM package. To add rules to the positive or negative rule set of a propagation, use one of the procedures in the DBMS_STREAMS_ADM package. To remove a rule from a propagation rule set, use the REMOVE_RULE procedure of DBMS_STREAMS_ADM. Managing Propagation Rules You can specify the rule set that you want to associate with a propagation by using the rule_set_name or negative_rule_set_name parameter in the ALTER_PROPAGATION procedure of the DBMS_PROPAGATION_ADM package. For example, the following procedure sets the rule set for a propagation named prop_to_site2 to stream1_new_rs: SQL> EXECUTE DBMS_PROPAGATION_ADM.ALTER_PROPAGATION( - 2> propagation_name => 'prop_to_site2', - 3> rule_set_name => 'strmadmin.stream1_new_rs'); To automatically create and add rules to the rule set of a propagation, you can run one of the following procedures in the DBMS_STREAMS_ADM package: ADD_TABLE_PROPAGATION_RULES ADD_SCHEMA_PROPAGATION_RULES ADD_GLOBAL_PROPAGATION_RULES ADD_SUBSET_PROPAGATION_RULES MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, MAINTAIN_TABLES MAINTAIN_TTS, MAINTAIN_SIMPLE_TTS Note: An empty rule set is not the same as no rule set. An empty rule set evaluates to “nothing should be processed,” whereas no rule set implies that “everything is processed.” Oracle Database 11g: Implement Streams I - 237

238 Managing Propagation Rules: Example
To add a rule to a propagation:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'prop_to_site3',
    source_queue_name      => 'hr_queue',
    destination_queue_name => 'strmadmin.hr_queue@site3.net',
    include_dml            => true,
    include_ddl            => true,
    include_tagged_lcr     => false,
    source_database        => 'site1.net',
    inclusion_rule         => FALSE);
END;
/

Managing Propagation Rules: Example
To add a rule to an existing propagation, use the ADD_*_PROPAGATION_RULES procedures in the DBMS_STREAMS_ADM package. Specify the same Streams name, source queue, and destination queue as defined for the existing propagation. (The destination queue name shown above is illustrative; supply the queue that is already defined for your propagation.) The example in the slide creates a table-level propagation rule and, because inclusion_rule is FALSE, adds it to the negative rule set for the existing propagation. If a negative rule set is not being used by the specified propagation, one is created, the rule is added to the new rule set, and the rule set is associated with the propagation.
Oracle Database 11g: Implement Streams I - 238

239 Managing Propagation Rules: Example
To remove a rule from a rule set for an existing propagation:

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => 'STRMADMIN.DEPARTMENTS3',
    streams_type     => 'propagation',
    streams_name     => 'prop_to_site3',
    drop_unused_rule => TRUE,
    inclusion_rule   => FALSE);
END;
/

Managing Propagation Rules: Example (continued)
You can remove a rule from a rule set for an existing propagation by running the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package. For example, the procedural call shown in the slide removes a rule named DEPARTMENTS3 from the negative rule set of a propagation named prop_to_site3.
Oracle Database 11g: Implement Streams I - 239

240 Managing Propagations
Use the DBMS_PROPAGATION_ADM package methods to manage propagation.
The DBMS_PROPAGATION_ADM.START_PROPAGATION procedure starts a propagation.
The DBMS_PROPAGATION_ADM.STOP_PROPAGATION procedure stops a propagation.
Managing Propagations
The DBMS_PROPAGATION_ADM package, which is one of a set of Streams packages, provides administrative interfaces for configuring a propagation from a source queue to a destination queue. This package provides the following subprograms:
ALTER_PROPAGATION procedure: Adds, alters, or removes a rule set for a propagation
CREATE_PROPAGATION procedure: Creates a propagation and specifies the source queue, destination queue, and rule set for the propagation
DROP_PROPAGATION procedure: Drops a propagation
START_PROPAGATION procedure: Starts a propagation
STOP_PROPAGATION procedure: Stops a propagation
In releases prior to Oracle Database 10g Release 2, the following package procedures were used to schedule propagation:
DBMS_AQADM.SCHEDULE_PROPAGATION and UNSCHEDULE_PROPAGATION
DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE and DISABLE_PROPAGATION_SCHEDULE
Note: Stop the capture and apply processes before stopping the propagation. Otherwise, the capture and apply processes may not terminate because, in combined capture and apply (CCA), the capture process propagates directly to the apply process.
Oracle Database 11g: Implement Streams I - 240
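Putting these subprograms together, a basic lifecycle might look like the following sketch (the propagation, queue, rule set, and database link names are illustrative):

```sql
-- Create a propagation from a local source queue to a queue at the
-- destination reached through the site2.net database link.
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'prop_to_site2',
    source_queue       => 'strmadmin.streams_queue',
    destination_queue  => 'strmadmin.streams_queue',
    destination_dblink => 'site2.net',
    rule_set_name      => 'strmadmin.stream1_rs',
    queue_to_queue     => TRUE);
END;
/

-- Stop and later restart the propagation.
EXEC DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'prop_to_site2');
EXEC DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'prop_to_site2');
```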

241 Monitoring Propagation
ALL_PROPAGATION, DBA_PROPAGATION
DBA_QUEUE_SCHEDULES, USER_QUEUE_SCHEDULES
V$PROPAGATION_RECEIVER
V$PROPAGATION_SENDER
V$BUFFERED_SUBSCRIBERS
Monitoring Propagation
The ALL_PROPAGATION and DBA_PROPAGATION views display the following information about all Streams propagations in the database:
The propagation name
The source queue owner and name, plus the destination queue owner and name
The rule set owner and rule set name
The destination database link name
The DBA_QUEUE_SCHEDULES and USER_QUEUE_SCHEDULES views describe the current schedules and statistics for propagation jobs, including:
The queue owner and queue name
The start date and start time
The propagation window, next scheduled time, and latency
Whether the schedule is disabled
The name of the process that is currently executing the schedule
The cluster database instance number that is currently executing the schedule
The date and time of the last successful execution
Statistics on the total, maximum, and average counts of propagated messages
The number of failures and information on the last reported error
The method of propagation (MESSAGE_DELIVERY_MODE can be BUFFERED or PERSISTENT)
Oracle Database 11g: Implement Streams I - 241
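A quick health check might query these views as follows (a sketch; exact column lists vary by release, so verify against your data dictionary):

```sql
-- List each propagation with its endpoints and current status.
SELECT propagation_name, source_queue_name,
       destination_queue_name, destination_dblink, status
FROM   dba_propagation;

-- Check schedule statistics and any reported failures.
SELECT qname, last_run_date, failures, last_error_msg
FROM   dba_queue_schedules;
```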

242 Oracle Database 11g: Implement Streams I - 242
Monitoring Propagation (continued)
V$PROPAGATION_RECEIVER displays information about the buffer queue propagation schedules at the receiving (destination) end. The values are reset to zero when the database (or instance in a RAC environment) restarts, when propagation migrates to another instance, or when an unscheduled propagation is attempted. You can view information such as:
The source queue and database names
The high-water mark of the messages received
Acknowledgement of the messages received by the receiver
Elapsed rule time
Time spent enqueuing messages
V$PROPAGATION_SENDER displays information about the buffer queue propagation schedules at the sending (source) end. The values are reset to zero when the database (or instance in a RAC environment) restarts, when propagation migrates to another instance, or when an unscheduled propagation is attempted. You can view information such as:
The destination queue name and owner
The database link used to propagate messages
The high-water mark of the messages sent
Status of the propagation schedule
Total number of messages sent
Total bytes of data sent
Elapsed dequeue time
Elapsed propagation time
V$BUFFERED_SUBSCRIBERS displays information about subscribers for all buffered queues in the instance. There is one row per subscriber per queue. In Oracle Database 10g Release 2 and later, there are two rows for each propagation: one for queue-to-database-link propagation and one for queue-to-queue propagation.
This view displays information such as:
The source queue name and owner
The subscriber name, address, and protocol
The sequence number and message number of the last browsed message
The sequence number of the most recently enqueued message for the subscriber
The total number of outstanding messages currently enqueued in the buffered queue for the subscriber (including messages that have overflowed to disk)
The total number of messages dequeued by the subscriber
The total number of spilled messages for the subscriber
Oracle Database 11g: Implement Streams I - 242
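For instance, sender-side throughput could be inspected with a query along these lines (a sketch; the column names are assumptions to confirm against the 11g reference for V$PROPAGATION_SENDER):

```sql
-- Throughput and timing at the sending side of each buffered propagation.
SELECT queue_name, dblink, total_msgs, total_bytes,
       elapsed_dequeue_time, elapsed_propagation_time
FROM   v$propagation_sender;
```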

243 Oracle Database 11g: Implement Streams I - 243
Managing Propagation
Managing Propagation
When you click the Propagation tab on the Streams Overview page, the Propagation administration page appears. On this page, you can:
View all propagations for a database
View the rule sets used by each propagation
Determine the source and destination queues for each propagation
View the current status of each propagation
Check for propagation failures or errors
Manage the rules for a propagation by clicking the appropriate rule set name link
View the statistics for a propagation
Configure a new propagation or delete an existing propagation
Oracle Database 11g: Implement Streams I - 243

244 Oracle Database 11g: Implement Streams I - 244
Summary In this lesson, you should have learned to: Describe how messages are propagated Schedule propagation between two queues Propagate captured messages Verify the propagation configuration Oracle Database 11g: Implement Streams I - 244

245 Practice 8 Overview: Configuring Propagation
This practice covers the following topics: Creating a propagation by using the DBMS_STREAMS_ADM package Querying the data dictionary for propagation information Determining the default propagation schedule Oracle Database 11g: Implement Streams I - 245

246 Result of Practices 6, 7, and 8: Configuring Propagation
[Diagram: the AMER and EURO databases, each with OE tables; STRM01_CAPTURE captures changes from the redo logs into the Q_CAPTURE queue (queue table Q_CAP_TAB), and STRM01_PROPAGATION propagates them between the databases]
Result of Practices 6, 7, and 8: Configuring Propagation
Because Practice 8 builds on the previous practices, you can view the combined result in the graphic in the slide.
Oracle Database 11g: Implement Streams I - 246

247 Apply Concepts and Configuration

248 Oracle Database 11g: Implement Streams I - 248
Objectives After completing this lesson, you should be able to: Describe the apply process architecture List which messages can and cannot be applied Configure one or more apply processes for a database Query the data dictionary for information about the apply process configuration Oracle Database 11g: Implement Streams I - 248

249 Oracle Database 11g: Implement Streams I - 249
What Is Apply?
Apply is:
The automatic dequeuing and processing of messages from a Streams queue or queue buffer
Implemented as one or more Oracle background processes
Each apply process has an apply user.
What Is Apply?
An apply process is an Oracle background process that dequeues messages from a queue and either applies each message directly to a database object or passes the message as a parameter to a user-defined procedure called an apply handler. Typically, an apply process applies messages to the local database where it is running. In a heterogeneous database environment, the apply process can be configured to apply messages at a remote non-Oracle database.
You can create, alter, start, stop, and drop an apply process, and you can define apply rules that control which messages an apply process dequeues from the queue.
The apply user is the user who applies the messages in the queue. This user must have the necessary privileges to dequeue the messages and apply the changes or execute the user-defined apply handler. However, the apply user may or may not be the user who created the apply process.
Oracle Database 11g: Implement Streams I - 249

250 Processing Streams Messages
A single apply process dequeues either captured messages or user-enqueued messages from a specific queue. For captured messages, the apply process can: Execute the DML or DDL message directly Pass the message as a parameter to a user-defined procedure User-enqueued messages in a SYS.AnyData queue can also be dequeued explicitly through a user-created application. Captured messages cannot be explicitly dequeued. Processing Streams Messages An apply process dequeues either captured messages or user-enqueued messages from a specific queue. These messages can be logical change records (LCRs) or non-LCR messages. For captured LCR messages, the apply process can either execute the enclosed commands immediately or pass the message as a parameter to a user-defined procedure. An apply process can also dequeue non-LCR messages. However, the message must be of the SYS.AnyData type, and can contain any user data. The message is passed to a user-defined procedure called a message handler. Note: Captured LCR messages are stored in the queue buffers and, thus, cannot be explicitly dequeued; they can be consumed only by an apply process. Support for explicit dequeue allows application developers to use Oracle Streams to notify applications of changes to data, while still leveraging the change capture and propagation features of Oracle Streams. Streams supports a variety of open interfaces for dequeuing messages, including Java Message Service (JMS), C, and Simple Object Access Protocol (SOAP). Explicit dequeue of non-LCR messages is covered in the lessons titled “Message Queuing Concepts” and “Enqueuing and Dequeuing Messages.” Oracle Database 11g: Implement Streams I - 250

251 Oracle Database 11g: Implement Streams I - 251
Applying DDL Messages
Successful application of DDL messages requires that the source and destination targets have matching:
Storage parameters for CREATE statements
Tablespace names and specifications
Partitioning names and specifications
Constraint and index names and specifications

CREATE TABLE range_sales (
  prod_id     NUMBER(6) PRIMARY KEY,
  time_id     DATE,
  amount_sold NUMBER(10,2))
TABLESPACE users
PARTITION BY RANGE (time_id)
 (PARTITION sales_q1_2007 VALUES LESS THAN
    (TO_DATE('01-APR-2007','DD-MON-YYYY')),
  ...);

Applying DDL Messages
To apply captured DDL messages properly at a destination database, you must ensure that either the destination database has the same database structures as the source database or the database structures are not specified in the data definition language (DDL) statement. The captured DDL message is reconstructed from the redo logs on the source site. The same DDL statement is sent to the destination, to be applied to the database objects there. If any database object or structure specified in the DDL statement does not exist at the destination database, the apply process generates an error. Database structures include data files, tablespaces, partitions, and other physical and logical structures that support database objects.
In some cases, a storage structure is not specified by name in the DDL statement, and a default value is used in the DDL statement. In such cases, the names and specifications can differ between the source and destination databases. For example, the following statement could create the index in a different tablespace on each site:
CREATE INDEX oe.ord_stat_ix ON orders (order_status) INITRANS 5 STORAGE (NEXT 100K);
Also, because not all storage parameters were set explicitly, parameters such as PCTINCREASE could have different values at each site.
Oracle Database 11g: Implement Streams I - 251

252 Other Considerations for Applying DDL
The user who is specified as the current_schema in the DDL LCR must exist at the destination database for the DDL LCR to be applied.
Avoid using system-generated names in schemas where the DDL changes are captured.

CREATE TABLE projects (
  proj_id  NUMBER CONSTRAINT proj_id_pk PRIMARY KEY,
  projname VARCHAR2(30) CONSTRAINT pname_nn NOT NULL,
  dept_id  NUMBER REFERENCES hr.departments(department_id));

Other Considerations for Applying DDL
For a DDL LCR to be applied to a destination database successfully, the user who is specified as the current_schema in the DDL LCR must exist in the destination database. In the example in the slide, the user who runs the DDL command to create the PROJECTS table must also exist in the destination database.
If you plan to capture DDL changes from a source database and apply these DDL changes to a destination database, avoid using system-generated names. If a DDL statement results in a system-generated name for an object, the name of the object typically will be different at the source database and at each destination database that is applying the DDL change from this source database. Different names for objects can result in apply errors for future DDL changes. For example, using the CREATE TABLE command shown in the slide, you may end up with the following constraints created for the table:

CONSTRAINT_NAME  TYPE
---------------  ----
PNAME_NN         C
PROJ_ID_PK       P
SYS_C005844      R

To modify or drop the foreign key constraint, you must reference the system-generated name, which is most likely different for each site:
ALTER TABLE projects DROP CONSTRAINT sys_c005844;
Oracle Database 11g: Implement Streams I - 252

253 DDL Messages That Are Not Applied
An apply process does not support the following types of DDL messages:
[ALTER|CREATE|DROP] MATERIALIZED VIEW
[ALTER|CREATE|DROP] MATERIALIZED VIEW LOG
[CREATE|DROP] DATABASE LINK
CREATE TABLE … AS SELECT on a clustered table
CREATE SCHEMA AUTHORIZATION
RENAME
[ALTER|CREATE|DROP] SUMMARY
DDL Messages That Are Not Applied
Some types of DDL changes that are captured by a capture process cannot be applied by an apply process. If an apply process receives a DDL LCR that specifies an operation that cannot be applied, the apply process ignores the DDL LCR and records information about it in the trace file for the apply process. You may see a message in the trace file, such as:
Apply process ignored the following DDL:
An apply process applies all other types of DDL messages if the DDL LCRs containing the changes satisfy the rules in the apply process rule set.
Note
RENAME commands, such as RENAME hr.jobs TO hr.old_jobs, are not captured or applied. If you need to replicate object-renaming commands, use the ALTER <object_type> <object_name> RENAME command instead of the RENAME <old_name> TO <new_name> command.
An apply process can also apply user-enqueued DDL LCRs as long as the DDL uses the correct Oracle syntax.
Oracle Database 11g: Implement Streams I - 253

254 Oracle Database 11g: Implement Streams I - 254
Applying Messages Messages are applied by an apply user. A transaction is considered applied or consumed when an apply process does one of the following: Commits a completed transaction Places a transaction in the error queue and commits the dequeue transaction Applying Messages Messages are applied by an apply user. The apply user applies all row changes resulting from data manipulation language (DML) operations and all DDL changes. The apply user also runs user-defined apply handlers. Each row LCR contains the identifier of the transaction in which the DML statement was run. A transaction can consist of one or more LCRs. An apply process always applies all the LCRs belonging to the same transaction as a unit to preserve transactional consistency with the source database. If an apply server process encounters an error, it tries to resolve the error with a user-specified error handler. If the apply server process cannot resolve the error, it rolls back any DML operations performed by the LCRs in the transaction, and places the entire transaction in the error queue. If an apply server process handles a transaction that has a dependency with another transaction, and it is not known if the dependent transaction has been applied, the apply server process contacts the apply coordinator process and waits for instructions. The apply coordinator process monitors all apply server processes to ensure that the transactions are applied and committed in the correct order. Note: Turning on apply parallelism requires additional supplemental logging at the source database. Oracle Database 11g: Implement Streams I - 254

255 Oracle Database 11g: Implement Streams I - 255
Error Queue Stores information about transactions that could not be successfully applied on the local database Is created automatically Contains all LCRs for each transaction not applied Stores error transactions for all apply processes in the database Error Queue The error queue stores information about transactions that could not be applied successfully by the apply processes running in a database. When an unhandled error occurs during apply, the apply process copies all messages in the transaction to the error queue automatically. The error queue contains information about errors encountered only at the local destination site. It does not contain information about errors at other sites in a Streams environment. When a transaction is moved to the error queue, all messages in the transaction are stored in a queue table, not in a buffered queue. When a queue table is created, an exception queue is automatically created for the queue table. Multiple queues may use a single queue table, and each queue table has one exception queue. Therefore, a single exception queue may store errors for multiple queues and multiple apply processes. An exception queue contains the apply errors only for its queue table, but the Streams error queue contains information about all apply errors in each exception queue in a database. Oracle Database 11g: Implement Streams I - 255

256 Oracle Database 11g: Implement Streams I - 256
Error Queue (continued) Note You must use the procedures in the DBMS_APPLY_ADM package to manage Streams apply errors. You must not dequeue apply errors directly from an exception queue. Messaging client errors are also stored in the error queue. If an application explicitly dequeues a message without using a messaging client, any errors encountered during the dequeue process are not stored in the error queue. Oracle Database 11g: Implement Streams I - 256
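To illustrate managing apply errors with DBMS_APPLY_ADM rather than by dequeuing from an exception queue directly, a sketch follows (the apply process name and transaction ID are illustrative):

```sql
-- Inspect failed transactions for all apply processes in the database.
SELECT apply_name, local_transaction_id, error_message
FROM   dba_apply_error;

-- Retry one failed transaction, or retry all errors for an apply process.
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY_SITE1');

-- Discard a transaction that should not be reapplied.
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '5.4.312');
```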

257 Required Apply User Privileges
For a user to be able to apply messages, the following privileges are required:
Secure queue user privileges for the queue
Dequeue privileges for the queue
EXECUTE privileges for:
Rule sets used by the apply process
Apply handler PL/SQL procedures called by the apply process
Rule-based transformation functions called by the apply process
DML and DDL privileges on the database objects and schemas
Required Apply User Privileges
The apply user is the user who applies all DML changes and DDL changes that satisfy the apply process rule sets and who runs user-defined apply handlers. You specify the apply user for an apply process by using the apply_user parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. To change the apply user, the user who invokes the ALTER_APPLY procedure must have been granted the DBA role. Running the ALTER_APPLY procedure grants the new apply user the dequeue privilege on the queue associated with the apply process, and configures the user as a secure queue user of the queue.
In addition, if the apply user does not own the following database objects, you must grant the apply user EXECUTE privileges for these objects:
Rule sets used by the apply process
All rule-based transformation functions used in the rule set
All apply handler procedures that may be called by the apply process
These privileges must be granted directly to the apply user. They cannot be granted through roles.
Oracle Database 11g: Implement Streams I - 257

258 Oracle Database 11g: Implement Streams I - 258
Required Apply User Privileges (continued) The user designated as the apply user must have the necessary privileges to perform SQL operations on the replicated objects. Specifically, the following privileges are required: For table-level DML changes, the INSERT, UPDATE, DELETE, and SELECT privileges must be granted. For table-level DDL changes, the ALTER TABLE privilege must be granted. For schema-level changes, the CREATE ANY TABLE, CREATE ANY INDEX, CREATE ANY PROCEDURE, ALTER ANY TABLE, and ALTER ANY PROCEDURE privileges must be granted. For global-level changes, you must use the GRANT ALL PRIVILEGES TO <username> command for the apply user. The apply user privileges must be granted by an explicit grant of each privilege. You cannot grant these privileges to the Streams apply user through a role. Oracle Database 11g: Implement Streams I - 258
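The grants described above could be set up as in the following sketch (the apply process, user, and rule set names are illustrative):

```sql
-- Designate a new apply user for an existing apply process.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'apply_site1',
    apply_user => 'hr_app');
END;
/

-- DML privileges must be granted directly, not through a role.
GRANT INSERT, UPDATE, DELETE, SELECT ON hr.employees TO hr_app;

-- EXECUTE privilege on a rule set is granted through DBMS_RULE_ADM.
BEGIN
  DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE(
    privilege   => DBMS_RULE_ADM.EXECUTE_ON_RULE_SET,
    object_name => 'strmadmin.apply_rs',
    grantee     => 'hr_app');
END;
/
```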

259 Configuring Apply Processes
You must configure:
A separate apply process for each source site
Separate apply processes for captured and user-enqueued messages
Each apply process works independently of other apply processes in the same database.
[Diagram: apply processes APL1 and APL2 applying LCRs from Sydney and NY, and APS1 applying user-enqueued messages from London]
Configuring Apply Processes
Each apply process can apply captured messages from only one capture process. If there are multiple capture processes running on a source database, and if LCRs from more than one of these capture processes are applied to a destination database, there must be one apply process to apply changes from each capture process. Each Streams queue used by a capture process or apply process should have captured messages from at the most one capture process from a particular source database. A queue can contain LCRs from multiple capture processes if each capture process is capturing changes at a different source database. If you have redo-based capture, synchronous capture, and user-enqueued messages generating LCRs, each of these requires a separate apply process.
If you create multiple apply processes in a database, the apply processes are completely independent of each other. These apply processes do not synchronize with each other, even if they apply LCRs from the same source database. User-enqueued messages include both user-created LCR messages and non-LCR messages.
Note: In contrast to an apply process that applies captured messages, a single apply process applying user-enqueued messages can apply messages from any number of source databases, including the apply database. However, it is recommended to have one apply process for each source.
Oracle Database 11g: Implement Streams I - 259
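Creating one such apply process for captured messages might look like the following sketch (the queue, apply process, rule set, and user names are illustrative):

```sql
-- Create an apply process that consumes captured LCRs from a queue,
-- then start it.
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.streams_queue',
    apply_name     => 'apply_site1',
    rule_set_name  => 'strmadmin.apply_rs',
    apply_user     => 'strmadmin',
    apply_captured => TRUE);   -- FALSE would consume user-enqueued messages
END;
/

EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_site1');
```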

260 Apply Process Components
The apply engine consists of:
One reader server process
One coordinator process
One or more apply server processes
Parallelization of apply maximizes concurrency.
Apply Process Components
The apply engine consists of the following processes:
A reader server process that dequeues messages from the queue or queue buffers. The reader server process is a parallel execution server process that computes dependencies between LCRs and assembles messages into transactions. The reader server process returns the assembled transactions to the coordinator process, which then assigns them to idle apply server processes.
A coordinator process that gets transactions from the reader and passes them to the apply server processes. The coordinator process name is APnn, where nn is a coordinator process number (0–9 and a–z). The coordinator process is an Oracle background process. The valid reader process names are from AS00 to ASnn.
A number of apply server processes (equal to the value of the apply parallelism parameter). These apply server processes apply LCRs to database objects as DML or DDL statements, or pass the LCRs to their appropriate handlers. For non-LCR messages, the apply server processes pass the messages to the message handler. Each apply server process is implemented as a parallel execution server process.
When an apply server process commits a completed transaction, the transaction has been applied. When an apply server process places a transaction in the error queue and commits, the transaction is also marked as applied.
Oracle Database 11g: Implement Streams I - 260

261 Oracle Database 11g: Implement Streams I - 261
Apply Rules Determine which messages are applied Can be positive or negative Are specified at the table, schema, or global level Include subset rules Can be customized to further restrict the data that is applied Apply Rules Object Schema Global Apply Rules An apply process applies changes or calls handlers based on the rules that you define. Positive rules specify the database objects to which an apply process applies changes and the types of changes to apply. Negative rules specify the database objects for which apply should not apply change messages. You can create DML or DDL rules at the following levels for apply: Table Schema Global or database Subset of the changes to a particular table For non-LCR messages, you create positive and negative messaging rules by using the ADD_MESSAGE_RULE procedure of the DBMS_STREAMS_ADM package. This is covered in the lessons titled “Message Queuing Concepts” and “Enqueuing and Dequeuing Messages.” You can use the AND_CONDITION parameter when appending an AND condition to a system-created rule. You can also use the DBMS_RULE_ADM package to create complex apply rules. Oracle Database 11g: Implement Streams I - 261

262 Enqueue Destination During Apply
BEGIN
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.orders5',
    destination_queue_name => 'ix.order_evt_queue');
END;
/

Enqueue Destination During Apply
When specifying a destination queue for a message, you may or may not want the message to be executed by the apply process. For example, you may want to:
Execute the message manually at a later time
Use an application to explicitly dequeue the message and process it
Use a propagation that is defined on the destination queue to forward the message down another data stream path
You can use the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package to instruct an apply process to enqueue messages into a specified queue of type SYS.AnyData when the message satisfies the associated rule. This procedure modifies the specified rule’s action context to specify the queue. The name-value pair for an enqueue destination is ('APPLY$_ENQUEUE', '<queue name>'). Only queues within the same database can be specified. Setting this parameter enables automatic enqueue of messages into a persistent queue so that other applications can access and manipulate the messages.
Using the SET_ENQUEUE_DESTINATION procedure, you can customize the routing of messages based on the rules used by the apply process. For example, you can set the destination queue for a rule named ORDERS5 to the queue IX.ORDER_EVT_QUEUE. Any apply process in the local database with the ORDERS5 rule in its positive rule set will enqueue a message into IX.ORDER_EVT_QUEUE if the message satisfies the ORDERS5 rule.
Oracle Database 11g: Implement Streams I - 262

263 Oracle Database 11g: Implement Streams I - 263
Enqueue Destination During Apply (continued) If you want to change the destination queue for the ORDERS5 rule to a different queue—for example, the IX.STREAMS_MESG_QUEUE—run the SET_ENQUEUE_DESTINATION procedure again and specify the new destination queue. The apply user of the apply processes using the specified rule must have the necessary privileges to enqueue messages into the specified queue. If the queue is a secure queue, the apply user must be a secure queue user of the queue. An LCR that has been enqueued into an alternate queue by the SET_ENQUEUE_DESTINATION procedure has the same characteristics as an LCR that was explicitly enqueued into the target queue. This means that the LCR can be explicitly dequeued, or it can be executed by an apply process that is configured to process user-enqueued messages. Oracle Database 11g: Implement Streams I - 263

264 Execution Directives During Apply
BEGIN
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name => 'strmadmin.orders5',
    execute   => TRUE);
END;
/

Execution Directives During Apply
You can use the SET_EXECUTE procedure in the DBMS_APPLY_ADM package to specify whether a message that satisfies the specified rule is executed by an apply process:
If the execute parameter is set to FALSE, the LCR is not executed.
If the execute parameter is set to TRUE, the message is executed.
If the message is an LCR and the message is not executed, the change encapsulated in the LCR is not made to the relevant local database object. Also, if the message is not executed, it is not sent to any apply handler.
By default, an apply process executes messages that satisfy a rule in the positive rule set for the apply process, assuming that the message does not satisfy any rule in the negative rule set for the apply process. Therefore, you need to set the execute parameter to TRUE for a rule only if this parameter was set to FALSE at some time in the past.
Oracle Database 11g: Implement Streams I - 264

265 Oracle Database 11g: Implement Streams I - 265
Execution Directives During Apply (continued) If you specify a value of FALSE for the execute parameter by using the SET_EXECUTE procedure, the specified rule’s action context is modified to contain the name-value pair for an execution directive. This name-value pair has APPLY$_EXECUTE for the name, and the value is a SYS.AnyData instance that contains 'NO' as a VARCHAR2. If you use the default value of TRUE for the execute parameter, the specified rule’s action context is not modified. Oracle Database 11g: Implement Streams I - 265

266 Apply Process and Column Discrepancies
The apply engine supports column discrepancies between the source and destination tables: Column data type mismatch: Use a transformation or DML handler on apply. Fewer columns at the destination: Use a transformation or DML handler to remove extraneous or unnecessary columns from the LCR. Extra columns at the destination: Use default value or NULL extra columns. If extra columns are used for dependency evaluations, the transaction is placed in the error queue. Apply Process and Column Discrepancies A column discrepancy is a difference between the columns of a table in a source database and the columns of the same table in a destination database. If there are column discrepancies in your Streams environment, use rule-based transformations or DML handlers to match the columns in the row LCRs between the tables in the source and destination databases. If the data type for a column of a table in the destination database does not match the data type for the same column in the source database, an apply process places the transactions containing changes to the mismatched column in the error queue. To avoid such an error, you can create a rule-based transformation or DML handler that converts the data type. If the table in the destination database is missing one or more columns that are in the table in the source database, an apply process raises an error and moves the transaction that caused the error into the error queue. You can avoid such an error by creating a rule-based transformation or DML handler that eliminates the missing columns from the LCRs before they are applied. Specifically, the transformation or handler can remove the extra columns by using the DELETE_COLUMN member procedure on the row LCR. Oracle Database 11g: Implement Streams I - 266
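One way to drop an extra source column before apply, as described above, is a declarative rule-based transformation (a sketch; the rule, table, and column names are illustrative):

```sql
-- Remove the BONUS column from row LCRs that satisfy the EMPLOYEES4
-- rule before the apply process applies them.
BEGIN
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strmadmin.employees4',
    table_name  => 'hr.employees',
    column_name => 'bonus',
    value_type  => '*',      -- remove from both old and new values
    step_number => 0,
    operation   => 'ADD');   -- 'REMOVE' would drop this transformation
END;
/
```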

267 Oracle Database 11g: Implement Streams I - 267
Apply Process and Column Discrepancies (continued) If the table in the destination database has more columns than the table in the source database, the apply process behavior depends on whether the extra columns are required for dependency computations. If the extra columns are not used for dependency computations, an apply process applies the changes to the destination table. In this case, if column defaults exist for the extra columns at the destination database, these default values are used during insert operations. Otherwise, these inserted columns are NULL. If the extra columns are used for dependency computations, the apply process places any transactions that include these columns in the error queue. Oracle Database 11g: Implement Streams I - 267

268 Oracle Database 11g: Implement Streams I - 268
Column Dependencies Dependencies may exist: Among LCRs within a single transaction Between different transactions The following types of columns are required for dependency computations: All key columns (For INSERT and DELETE statements) All constraint-related columns (For UPDATE statements) All columns related to a constraint if that constraint’s column is changed Column Dependencies Examples of dependencies: Foreign key constraints Updates to one column of a multiple-column constraint A row INSERT followed by an UPDATE to the same row If a transaction that is being handled by an apply server has a dependency with another transaction that is not known to have been applied, the apply server contacts the coordinator and waits for instructions. The coordinator monitors all apply servers to ensure that the transactions are applied and committed in the correct order. For example, consider the case in which a row is inserted into a table (transaction 1) and, in a separate transaction, the same row is updated to change certain column values (transaction 2). In this situation, transaction 2 is dependent on transaction 1 because you cannot update the row until after it is inserted into the table. Oracle Database 11g: Implement Streams I - 268

269 Virtual Dependency: Definition
Assist the apply process in computing transaction dependencies beyond the existing database constraints Enable apply handlers to process transformed LCRs correctly, avoiding errors Can improve performance by allowing the apply handlers to process LCRs in parallel Virtual Dependency: Definition When apply process parallelism is set to 1, a single apply server applies the transactions in the same order in which they were executed on the source database. However, when apply process parallelism is set to a value greater than 1, multiple apply servers apply the messages simultaneously. One apply server may be applying a transaction that has a dependency on another transaction that has not yet been applied. If the names of shared database objects are the same at the source and destination databases, and if the objects are in the same schemas in these databases, an apply process automatically detects dependencies between row LCRs, regardless of apply process parallelism, assuming that constraints are defined for the database objects in the destination database. Information about these constraints is stored in the data dictionary present in the destination database. Apply processes detect dependencies for both captured and user-enqueued row LCRs. If a shared database object is in different schemas in the source and destination databases, the Streams data dictionary in the destination database does not contain the information needed to detect dependencies between transactions. Apply errors or incorrect processing may result when an apply process cannot determine dependencies properly. [Slide graphic: application code enforcing a data integrity constraint between tables] Oracle Database 11g: Implement Streams I - 269

270 Oracle Database 11g: Implement Streams I - 270
Virtual Dependency: Definition (continued) An apply process requires additional information to detect dependencies in row LCRs that are being applied in parallel in the following cases: The data dictionary in the destination database does not contain the required information. This can occur when: A shared database object has a different name or is in a different schema in the source and destination databases Database constraints have been deliberately omitted at the destination site (for example, to improve the performance of the apply process) An application enforces dependencies during database operations instead of using database constraints. This relationship is not recorded in the data dictionary present in the destination database. Data is denormalized by an apply handler after dependency computation, such as when you split the information in a single row LCR to create row LCRs for multiple tables in the destination site. In some of the cases described in the preceding list, rule-based transformations can be used to avoid apply problems. For example, if a shared database object is in different schemas at the source and destination databases, a rule-based transformation can change the schema in the appropriate LCRs. However, the disadvantage with using rule-based transformations is that they cannot be executed in parallel. To optimize the performance of apply in these cases, you can create virtual dependency definitions. Virtual dependency definitions provide the required information so that apply processes can detect dependencies correctly before applying LCRs directly or passing LCRs to apply handlers. Virtual dependency definitions enable apply handlers to process these LCRs correctly, and the apply handlers can process them in parallel to improve performance. Oracle Database 11g: Implement Streams I - 270

271 Oracle Database 11g: Implement Streams I - 271
Dependency Types Value dependency: Is a virtual dependency definition that defines a table constraint, such as Unique key A relationship between the columns of two or more tables Uses LCRs from only one source Requires supplemental logging Object dependency: Defines a parent-child relationship Requires a parent row change with a lower SCN to be applied before a child row change with a higher SCN Must be used only if the relationship is not defined by constraints at the target database Dependency Types Virtual dependency definitions enable an apply process to detect additional dependencies. The apply process uses these virtual dependencies to apply transactions in the correct order. There are two types of virtual dependency definitions: A value dependency defines a table constraint, such as a unique key, or a relationship between the columns of two or more tables. Value dependencies are useful when relationships between columns in tables are not described by constraints in the data dictionary at the destination database. Value dependencies may define virtual foreign key relationships between tables and may involve more than two tables. An apply process uses the value dependencies to determine when two or more row LCRs in different transactions involve the same row in a table in the destination database. An object dependency defines a parent-child relationship between two objects in a destination database. Object dependencies are useful when relationships between tables are not described by constraints in the data dictionary at the destination database. An apply process schedules execution of transactions that involve the child object after all transactions with lower commit system change number (CSCN) values that involve the parent object have been committed. An apply process uses the object identifier in each row LCR to detect dependencies. Oracle Database 11g: Implement Streams I - 271

272 Defining Dependencies
Value dependencies:
DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY( 'EMP_VPK', 'DEMO.EMPLOYEES', 'LAST_NAME,HIRE_DATE,JOB_ID');
DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY( 'EMP_MGR_VFK', 'DEMO.EMPLOYEES', 'EMPLOYEE_ID');
DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY( 'EMP_MGR_VFK', 'HR.DEPARTMENTS', 'MANAGER_ID');
Object dependencies:
DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY( 'DEMO.EMPLOYEES', 'HR.DEPARTMENTS');
Defining Dependencies Value Dependencies Use the SET_VALUE_DEPENDENCY procedure in the DBMS_APPLY_ADM package to define or remove a value dependency at a destination database. An apply process uses a value dependency to detect dependencies between row LCRs that contain the columns defined in the value dependency. When creating a value dependency, you specify: The name of the value dependency that you want to create or modify The name of the table, specified as [schema_name.]table_name. If the schema is not specified, the current user is the default. If you specify NULL and the named dependency exists, the dependency is removed. One of the following: A comma-delimited list of column names in the table, with no spaces between entries A PL/SQL index-by table of the DBMS_UTILITY.NAME_ARRAY type that contains the names of columns in the table. The first column name must be at position 1, the second at position 2, and so on. NULL, if you are removing the dependency Oracle Database 11g: Implement Streams I - 272

273 Oracle Database 11g: Implement Streams I - 273
Defining Dependencies (continued) Value Dependencies (continued) The following restrictions pertain to value dependencies: The row LCRs that involve the database objects specified in a value dependency must originate from a single source database. Each value dependency must contain only one set of attributes for a particular database object. Also, any columns specified in a value dependency at a destination database must be supplementally logged at the source database. These columns must be unconditionally logged. The first example in the slide on the previous page defines a value dependency for the DEMO.EMPLOYEES table. It defines a dependency between the LAST_NAME, HIRE_DATE, and JOB_ID columns. The second example in the slide also defines a value dependency. It specifies a relationship between the EMPLOYEE_ID column in the DEMO.EMPLOYEES table and the MANAGER_ID column in the HR.DEPARTMENTS table. Object Dependencies Use the CREATE_OBJECT_DEPENDENCY procedure to create an object dependency at a destination database. Use the DROP_OBJECT_DEPENDENCY procedure to drop an object dependency at a destination database. Both these procedures are in the DBMS_APPLY_ADM package. When creating an object dependency, you specify: The name of the child database object, specified as [schema_name.]object_name. If the schema is not specified, the current user is the default. The name of the parent database object, specified as [schema_name.]object_name. If the schema is not specified, the current user is the default. The last example in the slide on the previous page defines an object dependency. It specifies a relationship between the DEMO.EMPLOYEES table and the HR.DEPARTMENTS table. The DEPARTMENTS table is the parent object. Note Tables with circular dependencies may result in apply process deadlocks when apply process parallelism is greater than 1. For example, Table A has a foreign key constraint on table B, and table B has a foreign key constraint on table A. 
Apply process deadlocks are possible when two or more transactions that involve tables with circular dependencies commit at the same SCN. Oracle Database 11g: Implement Streams I - 273
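Because the columns named in a value dependency must be unconditionally supplementally logged at the source database, a statement such as the following would typically accompany the EMP_VPK example. This is a sketch run at the source site; the log group name emp_vpk_log is illustrative:

```sql
-- At the source database: unconditionally log the columns that the
-- EMP_VPK value dependency uses at the destination.
-- The supplemental log group name is hypothetical.
ALTER TABLE demo.employees
  ADD SUPPLEMENTAL LOG GROUP emp_vpk_log
      (last_name, hire_date, job_id) ALWAYS;
```

The ALWAYS keyword makes the logging unconditional, so the column values appear in the redo (and therefore in the row LCRs) even when those columns are not changed by a given DML statement.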

274 Creating an Apply Process
You can create an apply process: Automatically by using the ADD_*_RULES procedures of the DBMS_STREAMS_ADM package Manually by using the CREATE_APPLY procedure of the DBMS_APPLY_ADM package A SYS.AnyData queue must exist before creating the apply process. The apply process is created in the disabled state. Creating an Apply Process You can use any of the following procedures to create an apply process: DBMS_STREAMS_ADM.ADD_TABLE_RULES DBMS_STREAMS_ADM.ADD_SUBSET_RULES DBMS_STREAMS_ADM.ADD_SCHEMA_RULES DBMS_STREAMS_ADM.ADD_GLOBAL_RULES DBMS_STREAMS_ADM.ADD_MESSAGE_RULE DBMS_STREAMS_ADM.MAINTAIN_* DBMS_APPLY_ADM.CREATE_APPLY The CREATE_APPLY procedure creates an apply process but does not create a rule set or rules for the apply process. You must use the CREATE_APPLY procedure to create an apply process that applies messages at a remote database. Note: By default, an apply process created with the CREATE_APPLY procedure in the DBMS_APPLY_ADM package applies user-enqueued messages. After creation, an apply process is disabled so that you can set the apply process parameters for your environment before starting it for the first time. Apply process parameters control the way in which an apply process operates. Oracle Database 11g: Implement Streams I - 274

275 Creating an Apply Process
Example: Creating an apply process for captured messages BEGIN DBMS_STREAMS_ADM.ADD_SCHEMA_RULES ( schema_name => 'HR', streams_type => 'apply', streams_name => 'apply_site1_lcrs', queue_name => 'ix.streams_queue', include_dml => TRUE, include_ddl => FALSE, source_database => 'site1.net'); END; / Creating an Apply Process (continued) When run, the procedure in the slide performs the following actions: Creates an apply process named APPLY_SITE1_LCRS that applies captured messages to the local database. The apply process is created only if it does not exist. Associates the apply process with an existing queue named STREAMS_QUEUE in the IX schema Creates a positive rule set and associates it with the apply process, if the apply process does not have a positive rule set. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. The rule set name is specified by the system. Creates one rule for the apply process to apply row LCRs that contain the results of DML changes to the database objects in the HR schema at SITE1.NET Adds the rule to the rule set that is associated with the apply process Sets the apply_tag for the apply process to hexadecimal 00 (double zero). Redo entries that are generated by the apply process have a tag with this value. Uses the default value for the include_tagged_lcr parameter, which is FALSE. This results in the clause is_null_tag='Y' being added to the generated rule, effectively causing the apply process to handle only those LCRs that have a NULL tag. Oracle Database 11g: Implement Streams I - 275

276 Creating an Apply Process
Example: Manually creating an apply process for applying user-enqueued LCR messages BEGIN DBMS_APPLY_ADM.CREATE_APPLY ( queue_name => 'strmadmin.streams_queue', apply_name => 'apply_site1_lcrs', rule_set_name =>'ix.stream1_rs', negative_rule_set_name=>'ix.stream1_neg_rs', apply_user => 'IX', apply_tag => NULL, apply_captured => FALSE); END; Creating an Apply Process (continued) The example in the slide creates an apply process that applies user-enqueued LCR messages. Running the procedure in the slide performs the following actions: Creates an apply process named APPLY_SITE1_LCRS. However, an apply process with the same name must not exist. Associates the apply process with a queue owned by STRMADMIN that is named STREAMS_QUEUE Associates the IX.STREAM1_RS and IX.STREAM1_NEG_RS rule sets with the apply process Specifies that the apply process does not use a DDL handler ( ddl_handler default value) or a precommit handler (precommit_handler default value) Sets IX as the apply user Specifies that the apply process applies changes to the local database (apply_database_link default value is NULL) Specifies that each redo entry that is generated by the apply process has a NULL tag Specifies that user-enqueued messages, not captured messages, are to be applied Oracle Database 11g: Implement Streams I - 276

277 Modifying the Apply Process
Set parameters with the SET_PARAMETER procedure of DBMS_APPLY_ADM. The value for a parameter is always entered as a VARCHAR2, regardless of the parameter’s data type. EXEC DBMS_APPLY_ADM.SET_PARAMETER ( - 'apply_site1_lcrs','disable_on_error','N'); Modifying the Apply Process Set an apply process parameter by using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package. Apply process parameters control the way an apply process operates. The example in the slide sets the disable_on_error parameter for the apply process APPLY_SITE1_LCRS to N, so that the apply process is not disabled on the first unresolved error. When using the SET_PARAMETER procedure, you always specify the parameter value as a VARCHAR2, even if the value of the parameter is a number or Boolean data type. This allows the SET_PARAMETER procedure to be used for setting any type of parameter. After setting the parameters, you can run the START_APPLY procedure in the DBMS_APPLY_ADM package to start the apply process. Oracle Database 11g: Implement Streams I - 277

278 Apply Process Parameters
ALLOW_DUPLICATE_ROWS COMMIT_SERIALIZATION DISABLE_ON_ERROR DISABLE_ON_LIMIT MAXIMUM_SCN PARALLELISM TIME_LIMIT TRACE_LEVEL TRANSACTION_LIMIT TXN_LCR_SPILL_THRESHOLD WRITE_ALERT_LOG PRESERVE_ENCRYPTION STARTUP_SECONDS Apply Process Parameters Apply process parameters control the way in which an apply process operates. In Oracle Database 11g, apply processes are renamed AP00 …APnn. The allow_duplicate_rows apply process parameter, introduced in Oracle Database 10g Release 2, can be set to Y to allow an apply process to apply a row LCR that changes more than one row. If the allow_duplicate_rows parameter is set to Y and more than one row is changed by a single row LCR with an UPDATE or DELETE command type, the apply process updates or deletes only one of the rows. If the parameter is set to N, the apply process raises an error when it encounters a single row LCR with an UPDATE or DELETE command that attempts to change multiple rows in a table. Regardless of the setting for this parameter, apply processes do not allow changes to duplicate rows for tables with LOB, LONG, or LONG RAW columns. Oracle Database 11g: Implement Streams I - 278

279 Oracle Database 11g: Implement Streams I - 279
Apply Process Parameters (continued) Apply servers may apply nondependent transactions at the destination database in an order that is different from the commit order at the source database, depending on the value of the commit_serialization parameter. Dependent transactions are always applied at the destination database in the same order as they were committed at the source database. The commit_serialization parameter can have one of the following values: FULL: An apply process commits applied transactions in the order in which they committed at the source database (default value). NONE: An apply process may commit transactions in any order. You get the best performance with this value. The disable_on_error parameter specifies whether or not the apply process must be disabled on the first unresolved error, even if the error is not fatal. The default value is Y. The disable_on_limit parameter specifies whether or not the apply process must be disabled if it reaches the value that is specified by the time_limit or transaction_limit parameter. If disable_on_limit is set to N, the apply process restarts automatically after it reaches a limit. The default value is N. The maximum_scn parameter indicates that the apply process must be disabled before applying a transaction with a commit SCN greater than or equal to the value that is specified. If infinite, the apply process runs regardless of the SCN value. The default value is INFINITE. The parallelism apply process parameter specifies the number of apply servers that may concurrently apply transactions. If parallelism is set to a value greater than 1, the apply process uses the specified number of apply servers to apply transactions concurrently. The default value is 1. The preserve_encryption parameter determines whether to preserve column encryption with transparent data encryption. 
If the parameter is set to Y (default), destination columns must be encrypted when the corresponding columns in the row LCRs are encrypted. If the columns are encrypted in row LCRs but the corresponding destination columns are not encrypted in the tables at the destination database, an error is raised when the apply process tries to apply the row LCRs. If you set the parameter to N, destination columns do not need to be encrypted when the corresponding columns in the row LCRs are encrypted. If the columns are encrypted in row LCRs but the corresponding columns are not encrypted in the tables at the destination database, the apply process applies the changes in the row LCRs. The startup_seconds parameter specifies the maximum number of seconds to wait for another instantiation of the same apply process to finish. Possible values are 0 (zero, default), a positive integer, or infinite. If the other instantiation of the same apply process does not finish within this time, the apply process does not start. Oracle Database 11g: Implement Streams I - 279

280 Oracle Database 11g: Implement Streams I - 280
Apply Process Parameters (continued) Note: Setting the parallelism parameter to a number higher than the number of available parallel execution servers may disable the apply process. Ensure that the PROCESSES and PARALLEL_MAX_SERVERS initialization parameters are set appropriately when you set the parallelism apply process parameter. By using the time_limit apply process parameter, you can specify the amount of time (in seconds) that an apply process runs before it is shut down automatically. The default value is INFINITE. You must use the trace_level parameter only under the guidance of Oracle Support Services. By using the transaction_limit apply process parameter, you can specify the number of transactions that an apply process applies before it is shut down automatically. If infinite, the apply process continues to run regardless of the number of transactions applied. The default value is INFINITE. The apply process begins to spill messages from memory to disk for a particular transaction when the number of messages in memory for the transaction exceeds the number specified by the txn_lcr_spill_threshold parameter. The number of messages in the first chunk of messages spilled from memory equals the number specified for this parameter, and the number of messages spilled in future chunks is either 100 or the number specified for this parameter, whichever is less. When the reader server spills messages from memory, the messages are stored in a database table on the hard disk. However, these messages are not spilled from memory to a queue table. Message spilling occurs at the transaction level. For example, if this parameter is set to 10000, and the reader server of an apply process is assembling two transactions, one with 7,500 messages and another with 8,000 messages, it does not spill any messages. You can set this parameter to the constant value infinite, in which case the apply process does not spill messages to disk. 
The write_alert_log parameter determines whether or not the apply process writes a message to the alert log on exit. Oracle Database 11g: Implement Streams I - 280
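Several of the parameters above could be tuned together with SET_PARAMETER. The following sketch uses the APPLY_SITE1_LCRS process from earlier in this lesson; the specific values chosen (4 apply servers, NONE serialization, a 10,000-message spill threshold) are illustrative, and every value is passed as a VARCHAR2 regardless of the parameter's data type:

```sql
-- Sketch: tune the APPLY_SITE1_LCRS apply process. All values are strings.
BEGIN
  -- Use four apply servers (check PROCESSES and PARALLEL_MAX_SERVERS first)
  DBMS_APPLY_ADM.SET_PARAMETER('apply_site1_lcrs', 'parallelism', '4');
  -- Allow nondependent transactions to commit in any order (best performance)
  DBMS_APPLY_ADM.SET_PARAMETER('apply_site1_lcrs', 'commit_serialization', 'NONE');
  -- Spill transactions with more than 10,000 in-memory messages to disk
  DBMS_APPLY_ADM.SET_PARAMETER('apply_site1_lcrs', 'txn_lcr_spill_threshold', '10000');
END;
/
```

The current settings can then be confirmed by querying DBA_APPLY_PARAMETERS for this apply process.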

281 Managing Apply Processes
The DBMS_APPLY_ADM package contains the following procedures: START_APPLY ALTER_APPLY, which you can use to: Specify or remove a rule set Specify or remove a message, DDL, or precommit handler Specify an apply user Specify or remove a tag for apply events STOP_APPLY DROP_APPLY Managing Apply Processes Use the START_APPLY procedure to start an apply process. Use the STOP_APPLY procedure to stop an apply process. Run the DROP_APPLY procedure to drop an apply process. However, you must stop the apply process before you drop it. By using the ALTER_APPLY procedure, you can: Associate either a positive or negative rule set with an apply process Remove either a positive or negative rule set from the apply process Add or remove a message, DDL, or precommit handler Specify the user who applies all DML and DDL changes, and who executes the apply handlers Specify a nondefault binary tag that is added to redo entries generated by the specified apply process Remove the apply tag that is used by the specified apply process. The redo entries generated by the apply process will then have NULL tags. Oracle Database 11g: Implement Streams I - 281
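A minimal lifecycle sketch tying these procedures together, using the APPLY_SITE1_LCRS process from earlier in this lesson (the apply user HR is an illustrative value):

```sql
BEGIN
  -- Start the apply process created earlier in this lesson
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_site1_lcrs');

  -- Later: change the user who applies changes and runs apply handlers
  -- (illustrative value)
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'apply_site1_lcrs',
    apply_user => 'HR');

  -- Stop the process; it must be stopped before it can be dropped
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'apply_site1_lcrs');
  DBMS_APPLY_ADM.DROP_APPLY(
    apply_name            => 'apply_site1_lcrs',
    drop_unused_rule_sets => FALSE);  -- keep the rule sets for reuse
END;
/
```

In practice, START_APPLY and the later maintenance calls would be issued at different times rather than in one block; they are grouped here only to show the call shapes.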

282 Oracle Database 11g: Implement Streams I - 282
Managing Apply Processes (continued) DROP_APPLY Procedure This procedure drops the apply process. The syntax is as follows: Syntax DBMS_APPLY_ADM.DROP_APPLY( apply_name IN VARCHAR2, drop_unused_rule_sets IN BOOLEAN DEFAULT FALSE); Parameters APPLY_NAME: The name of the apply process being dropped. You must specify an existing apply process name. Do not specify an owner. DROP_UNUSED_RULE_SETS: If TRUE, the procedure drops any rule sets, positive and negative, that are used by the specified apply process if these rule sets are not used by any other Oracle Streams client. Oracle Streams clients include capture processes, propagations, apply processes, and messaging clients. If this procedure drops a rule set, it also drops any rules in the rule set that are not in another rule set. If FALSE, the procedure does not drop the rule sets used by the specified apply process, and the rule sets retain their rules. When you use this procedure to drop an apply process, information about the rules created for the apply process using the DBMS_STREAMS_ADM package is removed from the data dictionary views for Oracle Streams rules. Information about such a rule is removed even if the rule is not in either the positive or negative rule set for the apply process. The following are the data dictionary views for Oracle Streams rules: ALL_STREAMS_GLOBAL_RULES DBA_STREAMS_GLOBAL_RULES ALL_STREAMS_MESSAGE_RULES DBA_STREAMS_MESSAGE_RULES ALL_STREAMS_SCHEMA_RULES DBA_STREAMS_SCHEMA_RULES ALL_STREAMS_TABLE_RULES DBA_STREAMS_TABLE_RULES STOP_APPLY Procedure This procedure stops the apply process from applying messages and rolls back any unfinished transactions being applied. Syntax: DBMS_APPLY_ADM.STOP_APPLY( apply_name IN VARCHAR2, force IN BOOLEAN DEFAULT FALSE); Parameters: APPLY_NAME: The apply process name. A NULL setting is not allowed. Do not specify an owner. FORCE: If TRUE, the procedure stops the apply process as soon as possible. 
If FALSE, the procedure stops the apply process after ensuring that there are no gaps in the set of applied transactions. The behavior of the apply process depends on the setting specified for the force parameter and the setting specified for the commit_serialization apply process parameter. Oracle Database 11g: Implement Streams I - 282

283 Querying the Data Dictionary
ALL_APPLY ALL_APPLY_ENQUEUE ALL_APPLY_ERROR ALL_APPLY_EXECUTE ALL_APPLY_KEY_COLUMNS ALL_APPLY_PARAMETERS ALL_APPLY_PROGRESS DBA_APPLY_OBJECT_DEPENDENCIES DBA_APPLY_VALUE_DEPENDENCIES DBA_APPLY_SPILL_TXN Querying the Data Dictionary You can use the following static data dictionary views to obtain information about the configuration of apply. Each view is available with the prefixes ALL_ and DBA_, unless noted otherwise. ALL_APPLY displays general information about the apply processes that dequeue messages from the queues that are accessible to the current user. ALL_APPLY_ENQUEUE displays information about the apply enqueue actions for the rules where the destination queue exists and is accessible to the current user. ALL_APPLY_ERROR displays information about the error transactions generated by the apply processes that dequeue messages from the queues that are accessible to the current user. ALL_APPLY_EXECUTE displays information about the apply execute actions for the rules that are visible to the current user. ALL_APPLY_KEY_COLUMNS displays information about alternative key columns for tables that are accessible to the current user. ALL_APPLY_PARAMETERS displays information about the parameters for the apply processes that dequeue messages from the queues that are accessible to the current user. Oracle Database 11g: Implement Streams I - 283

284 Oracle Database 11g: Implement Streams I - 284
Querying the Data Dictionary (continued) ALL_APPLY_PROGRESS displays information about the progress made by the apply processes that dequeue messages from the queues that are accessible to the current user. This view contains information only about the captured messages. It does not contain information about the user-enqueued messages. DBA_APPLY_OBJECT_DEPENDENCIES displays information about object dependencies for all apply processes in the database. DBA_APPLY_VALUE_DEPENDENCIES displays information about the value dependencies for all apply processes in the database. The DBA_APPLY_SPILL_TXN data dictionary view displays information about the transactions spilled by all apply processes in the database. With the following views, you can monitor the rules used by the apply process: [ALL | DBA ]_STREAMS_GLOBAL_RULES [ALL | DBA ]_STREAMS_SCHEMA_RULES [ALL | DBA ]_STREAMS_TABLE_RULES [ALL | DBA ]_STREAMS_RULES With the following views, you can view the instantiation information for objects to which the captured messages are applied: DBA_APPLY_INSTANTIATED_GLOBAL DBA_APPLY_INSTANTIATED_OBJECTS DBA_APPLY_INSTANTIATED_SCHEMAS Oracle Database 11g: Implement Streams I - 284
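For example, a monitoring query against DBA_APPLY_ERROR lists the error transactions that are waiting to be reexecuted or deleted. This is a sketch; adjust the selected columns and ordering to your needs:

```sql
-- Sketch: list error transactions held in the error queue
SELECT apply_name,
       source_database,
       local_transaction_id,
       message_count,
       error_message
FROM   dba_apply_error
ORDER  BY apply_name, local_transaction_id;
```

The LOCAL_TRANSACTION_ID reported here is the identifier you would pass to the DBMS_APPLY_ADM error-handling procedures when resolving a failed transaction.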

285 Managing the Apply Process
When you click the Apply tab under Streams Management, the Apply administration page appears. On this page, you can: View all apply processes for a database View the rule sets used by each apply process Determine which queue an apply process is associated with Determine the source database for each apply process View the current state and the status of each apply process Determine whether the apply process processes captured or user-enqueued messages Determine whether there are any handlers associated with the apply process View the apply tag used by each apply process Manage the rules for an apply process by clicking the appropriate rule set name link Edit the apply process parameters View the statistics and errors for an apply process Start, stop, or delete an apply process To view configuration details, select an apply process and click the View button (or click the apply process name link). Oracle Database 11g: Implement Streams I - 285

286 Checking the Apply Process
Verify the apply process configuration:
SELECT apply_name, apply_captured CAP, status, status_change_time, rule_set_name, negative_rule_set_name FROM DBA_APPLY;

APPLY_NAME        CAP  STATUS   STATUS_CH  RULE_SET_NAME  NEGATIVE_RULE_SET_NAME
----------------  ---  -------  ---------  -------------  ----------------------
APPLY_SITE1_LCRS  YES  ENABLED             STREAM1_RS     STREAM1_NEG_RS

Checking the Apply Process The apply process STATUS should be ENABLED. The possible values for the STATUS column are: ENABLED DISABLED ABORTED The APPLY_CAPTURED column has a value of YES if the apply process applies implicitly captured DML or DDL messages. If the APPLY_CAPTURED column has a value of NO, the apply process handles messages that are explicitly enqueued into the Streams queue either locally or remotely. Oracle Database 11g: Implement Streams I - 286

287 Oracle Database 11g: Implement Streams I - 287
Summary In this lesson, you should have learned how to: Describe the apply process architecture List which messages can and cannot be applied Configure one or more apply processes for a database Query the data dictionary for information about the apply process configuration Oracle Database 11g: Implement Streams I - 287

288 Practice 9 Overview: Creating an Apply Process
This practice covers the following topics: Creating an apply process by using the DBMS_STREAMS_ADM package Querying the data dictionary for apply process information Querying the rules and rule conditions that are used by the apply process Adding new rules to the apply process rule set by using ADD_TABLE_RULES Oracle Database 11g: Implement Streams I - 288

289 Practice 9 Overview: Creating an Apply Process
This practice covers the following topics (continued): Verifying that the capture process is current Querying propagation statistics Inserting new rows into a shared table at the source site Querying the data dictionary views to verify that the changes were captured and propagated Verifying that the DML changes were applied to the shared table at the destination site Verifying that the apply handler audited an order update Querying the error queue on the destination site Oracle Database 11g: Implement Streams I - 289

290 Result of Practices 6, 7, 8, and 9: Configuring Apply
[Figure: AMER database (OE tables, redo logs, STRM01_CAPTURE, Q_CAPTURE) linked by STRM01_PROPAGATION to the EURO database (Q_APPLY in Q_APP_TAB, STRM01_APPLY, OE tables)] Result of Practices 6, 7, 8, and 9: Configuring Apply Because Practice 9 builds on the previous practices, you can view the combined result in the graphic in the slide. Oracle Database 11g: Implement Streams I - 290

291 Configuring Downstream Capture

292 Oracle Database 11g: Implement Streams I - 292
Objectives After completing this lesson, you should be able to: Describe downstream capture Configure downstream capture Test downstream capture Oracle Database 11g: Implement Streams I - 292

293 Oracle Database 11g: Implement Streams I - 293
Downstream Capture [Figure: redo is shipped by log transport (LGWR, FTP, or DBMS_FILE_TRANSFER) from the source to the downstream database, whose SGA Streams pool hosts the capture process, LogMiner, and staging queue; an optional database link points back to the source] Downstream Capture With downstream capture, the related database objects (such as the LogMiner session, queues, rules, and Streams processes) are created on the downstream database, which is different from the source database. To support downstream capture, the redo logs from the source site must be transported to the downstream database. Redo transport services use the log writer process (LGWR) at the source database to send redo data to the database that runs the downstream capture process. When you create a downstream capture process at the downstream database, the BUILD procedure of the DBMS_CAPTURE_ADM package is run at the source database to extract the source data dictionary information to local redo logs. After these logs have been copied to the downstream database, a LogMiner data dictionary can be created at the downstream database. If you create an additional downstream capture process for the same source database, it can use the same LogMiner data dictionary or create a new one. The redo logs generated at the source database can be transported to the downstream database through the Oracle Database log transport service or any other supported method of transporting redo log files, such as the DBMS_FILE_TRANSFER package or file transfer protocol (FTP). If you do not use the log transport service, you must add the log files to the capture process explicitly by using the ALTER DATABASE REGISTER LOGICAL LOGFILE '<FILENAME>' FOR '<CAPTURE NAME>' command. Oracle Database 11g: Implement Streams I - 293

294 Oracle Database 11g: Implement Streams I - 294
Downstream Capture (continued) Using a downstream capture process can provide the following benefits: It enables greatly reduced overhead for supporting Streams on the source database. It enables easier capture process administration (because you can locate capture processes with different source sites on the same database). Configuring multiple capture processes for the same source database at a downstream database provides additional scalability and flexibility in configuring your Streams environment. Shipping the redo logs to a downstream database provides additional protection against database failure and corruption. You can configure multiple capture processes at a downstream database to capture changes from a single-source database. You can also copy redo log files from multiple-source databases and configure multiple capture processes to capture changes in these redo log files at a single downstream database. In addition, a single database may have one or more capture processes that capture local changes and other capture processes that capture changes from a remote source database. That is, you can configure a single database to perform both local capture and downstream capture. Because the source and downstream databases use copies of the same archived log file, both databases must run on the same operating system and must have the same database version. If the downstream database propagates messages to another destination by using Oracle Streams, the other destination database does not need to have the same operating system or database version. Downstream capture can capture changes as long as the OS platform of the mining database is the same as the source (origin) database. The mining database release must be the same as, or later than, the source database. Both source and mining databases must be Oracle Database 10g or later. Oracle Database 11g: Implement Streams I - 294

295 Prerequisites for Downstream Capture
Configure a Streams administrator, with ARCHIVELOG mode and (optionally) a staging queue. Configure network connectivity. (Optional) Create a database link from the target back to the source database. Enable supplemental logging on the source database if you are not using a database link. Configure log transport services. Copy password file for database authentication. Add standby redo log at the downstream database. Prerequisites for Downstream Capture With downstream capture, the related database objects (such as the LogMiner session, queues, rules, and Oracle Streams processes) are created on the downstream database. To populate the Streams dictionary at the downstream database, the dictionary information must be obtained from the source database. Configuring supplemental logging and preparing for instantiation must also be performed at the source database. Creating a database link from the downstream capture site to the source database is optional. However, creating this database link greatly simplifies the configuration and management of the downstream database. Before creating a downstream capture process, make sure that you have performed the following configuration steps: Create the Streams administrator and grant the required privileges at both sites. Both databases must be in ARCHIVELOG mode. You may create a Streams queue with the DBMS_STREAMS_ADM.SET_UP_QUEUE procedure. Configure network connectivity between the source and downstream databases. Configure supplemental logging for tables that are being captured if you are not using a database link to the source database. If you are using the log transport services of the Oracle database, you must configure authentication at both databases. Oracle Database 11g: Implement Streams I - 295
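The prerequisite steps above can be sketched as follows. This is a minimal sketch with hypothetical names (strmadmin, streams_queue, and the SITE1.NET service name are illustrative); your environment may need different privileges and storage settings.

```sql
-- Run at the downstream database as a privileged user.
CREATE USER strmadmin IDENTIFIED BY strmadmin
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT DBA TO strmadmin;

BEGIN
  -- Grant the Streams administrator privileges
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');
END;
/

BEGIN
  -- Optional staging queue for the downstream capture process
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
END;
/

-- Optional: database link from the downstream site back to the source
CONNECT strmadmin/strmadmin
CREATE DATABASE LINK site1.net
  CONNECT TO strmadmin IDENTIFIED BY strmadmin
  USING 'SITE1.NET';
```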

296 Oracle Database 11g: Implement Streams I - 296
Prerequisites for Downstream Capture (continued) Redo transport sessions are authenticated by either the Secure Sockets Layer (SSL) protocol or by a remote login password file. If the source database has a remote login password file, copy it with the orapwd utility to the appropriate directory on the downstream capture database system. Example: cd $ORACLE_HOME/dbs orapwd file=orapweuro entries=10 ignorecase=y force=y Add a standby redo log at the downstream database. Oracle Database 11g: Implement Streams I - 296

297 Configuring Log Transport Services
Configure initialization parameters for both the source and destination databases: LOG_ARCHIVE_CONFIG LOG_ARCHIVE_DEST_STATE_n ALTER SYSTEM SET log_archive_config = 'SEND, RECEIVE, DG_CONFIG=(amer,euro)' SCOPE=BOTH; ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE'; Configuring Log Transport Services Oracle Streams uses the existing database initialization parameters for configuring the transportation of redo logs from the source database to the downstream database. Configure the following parameters for both the source and destination database: LOG_ARCHIVE_CONFIG: Enables or disables the sending and receiving of redo logs. To use downstream capture and copy, specify DB_UNIQUE_NAME of the source database and the downstream database using the DG_CONFIG attribute, which also specifies the unique database names in a Data Guard configuration. LOG_ARCHIVE_DEST_STATE_n: Specifies the availability of the destination. It must be set to ENABLE for the corresponding LOG_ARCHIVE_DEST_n parameter Oracle Database 11g: Implement Streams I - 297
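After setting these parameters, one way to confirm that the remote destination is healthy is to query V$ARCHIVE_DEST at the source database (a sketch; the destination number here matches the LOG_ARCHIVE_DEST_2 parameter used in these examples):

```sql
-- STATUS should be VALID and ERROR should be null for destination 2
SELECT dest_id, status, error
FROM   V$ARCHIVE_DEST
WHERE  dest_id = 2;
```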

298 Configuring Log Transport Services at the Source Database
Use the LOG_ARCHIVE_DEST_n source database initialization parameter with the following attributes: ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=EURO ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=euro'; Configuring Log Transport Services at the Source Database Use one instance of the LOG_ARCHIVE_DEST_n parameter on the source database to configure the downstream database as a remote destination. Configure the following attributes: SERVICE: Specify the network service name of the downstream database. ASYNC or SYNC: Specify a redo transport mode. With ASYNC, the LGWR process transmits the redo data asynchronously to the downstream capture site, resulting in little or no impact on the performance of the source database; ASYNC is recommended if the source database is Oracle Database 10g Release 1 or later. With SYNC, the LGWR process transmits the redo data synchronously to the downstream capture site and waits for the I/O to complete before continuing; this prevents data loss because it ensures that the redo records are successfully transmitted to the downstream database before continuing. NOREGISTER: Specify this attribute so that the location of the archived redo log files is not recorded in the downstream database control file. VALID_FOR: At the source database, specify either (ONLINE_LOGFILE, PRIMARY_ROLE) or (ONLINE_LOGFILE, ALL_ROLES). DB_UNIQUE_NAME: The unique name of the downstream database, as specified for the DB_UNIQUE_NAME initialization parameter at the downstream database. Oracle Database 11g: Implement Streams I - 298

299 Configuring Downstream Initialization at the Destination Database
Configure the downstream database to receive redo data from the source database LGWR process and write the redo data to the standby redo log at the downstream database. ALTER SYSTEM SET log_archive_config = 'SEND, RECEIVE, DG_CONFIG=(amer,euro)' SCOPE=BOTH; ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/home/oracle/EURO VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'; ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE'; Note: You probably also need to enlarge the SGA of the destination database for this new capture process. Configuring Downstream Initialization at the Destination Database As mentioned earlier in this lesson, the LOG_ARCHIVE_CONFIG and the LOG_ARCHIVE_DEST_STATE_n initialization parameters can be configured in the same way for the source and destination databases. But the LOG_ARCHIVE_DEST_n destination parameter should be defined as follows: LOCATION: Specify a valid path name for a disk directory on the system that hosts the downstream database. Each destination that specifies the LOCATION attribute must identify a unique directory path name. This is the local destination for the archived redo log files that are generated from the standby redo logs. Log files from a remote source database must be kept separate from local database redo log files. In addition, if the downstream database contains log files from multiple source databases, the log files from each source database must be kept separate. VALID_FOR: Specify either (STANDBY_LOGFILE, PRIMARY_ROLE) or (STANDBY_LOGFILE, ALL_ROLES). You must ensure that log files are not deleted or removed until after they have been consumed at the downstream database. You must also be able to restore these log files if recovery is needed when a capture process is restarted. For more information about configuring log transport services, refer to the Oracle Database Reference or Oracle Data Guard Concepts and Administration. Oracle Database 11g: Implement Streams I - 299

300 Configuring Standby Logs at the Destination Database
1. Determine the source log file size. 2. Determine the number of standby log file groups. 3. Ensure access rights to the new standby log files. [Figure: redo log groups at AMER feed standby log file groups at EURO] Configuring Standby Logs at the Destination Database 1. Determine the log file size used in the source database. The standby log file size must be the same as the source log file size. 2. Determine the number of standby log file groups. It must be at least one more than the number of online log file groups in the source database. 3. Ensure that the new standby log files have appropriate OS access rights. Oracle Database 11g: Implement Streams I - 300
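The three steps above can be sketched as follows, assuming (hypothetically) 50 MB source log files and three online log groups at AMER; group numbers, file names, and sizes must be adapted to your system:

```sql
-- Step 1: at the source (AMER), determine the online log file size
SELECT MAX(bytes) FROM V$LOG;

-- Steps 2 and 3: at the destination (EURO), create one more standby
-- log group than the source has online groups, each the same size as
-- the source log files, in a directory the database can write to.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/home/oracle/EURO/slog4.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  ('/home/oracle/EURO/slog5.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
  ('/home/oracle/EURO/slog6.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
  ('/home/oracle/EURO/slog7.rdo') SIZE 50M;
```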

301 Creating a Downstream Capture Process with a Database Link
[Figure: log transport from Site1 to Site2; database link back to Site1; staging queue, capture process, and rules created at Site2] Creating a Downstream Capture Process with a Database Link 1. Verify that the prerequisites listed in an earlier slide have been met, including the creation of a database link. 2. After completing the database configuration, connect to the downstream database as a user with Streams administrator privileges and run the CREATE_CAPTURE procedure. Here is an example: BEGIN DBMS_CAPTURE_ADM.CREATE_CAPTURE( queue_name => 'STREAMS_QUEUE', capture_name => 'CAPTURE_DOWNSTREAM_SITE1', use_database_link => TRUE, source_database => 'SITE1.NET', logfile_assignment => 'implicit'); END; / A new capture process is created at the downstream database. Using a database link during the downstream capture process creation is optional. However, creating this database link greatly simplifies the configuration and management of the downstream database. Oracle Database 11g: Implement Streams I - 301

302 Oracle Database 11g: Implement Streams I - 302
Creating a Downstream Capture Process with a Database Link (continued) When you specify a database link for a downstream capture process, several steps are performed automatically for you: If this is the first capture process to be created for the source database, the database link is used during the capture creation process to run the DBMS_CAPTURE_ADM.BUILD procedure at the source database. When the downstream capture process is started, it reads the dictionary information from the transported redo logs and uses it to create a LogMiner dictionary at the downstream database. It obtains the first system change number (SCN) value from the BUILD procedure. It sets the first SCN value for the newly created downstream capture process. 3. Create rules at the downstream database and associate them with the newly created capture process. You can add rules using the ADD_*_RULES procedures in the DBMS_STREAMS_ADM package, or you can create them manually by using procedures in the DBMS_RULE_ADM package. It may not be feasible or desirable to configure a database link between the source and downstream databases because: The downstream database is outside a firewall The DBA does not want any inbound database link to the source database Usage Notes The database link from the downstream database to the source database cannot contain a connection qualifier. After the downstream capture process is created, changing the database name or the database ID (DBID) of the source database is not supported. Oracle Database 11g: Implement Streams I - 302

303 Creating a Downstream Capture Process Without a Database Link
[Figure: log transport from Site1 to Site2; at Site2, the SGA shared pool and Streams pool host the data dictionary cache, rules, capture process, and staging queue] Creating a Downstream Capture Process Without a Database Link If a database link does not exist, you must perform the following additional steps at the source database. (The CREATE_CAPTURE procedure at the downstream database is unable to perform these steps because there is no database link.) 1. Complete all prerequisite steps before creating the downstream capture process. You must additionally configure supplemental logging for all replicated tables at the source database. 2. Determine the first SCN for the downstream capture process. If a capture process does not exist at the downstream database for the source database, you must run the BUILD procedure of DBMS_CAPTURE_ADM at the source site to dump the data dictionary information to the redo logs. Use the optional OUT parameter of the BUILD procedure to obtain the first SCN value: SQL> VARIABLE fscn NUMBER SQL> EXECUTE DBMS_CAPTURE_ADM.BUILD(:fscn); SQL> PRINT fscn If a capture process for the same source database exists at the downstream database, you can use the first SCN for the existing data dictionary build in the redo logs. Oracle Database 11g: Implement Streams I - 303

304 Oracle Database 11g: Implement Streams I - 304
Creating a Downstream Capture Process Without a Database Link (continued) Query the V$ARCHIVED_LOG view to determine a valid first SCN value: SELECT DISTINCT first_change# FROM V$ARCHIVED_LOG WHERE dictionary_begin = 'YES'; FIRST_CHANGE# 409391 If multiple values are returned for the query against V$ARCHIVED_LOG, record the highest value. 3. Because you are not using a database link for the downstream capture process, you must manually prepare for instantiation at the source database any objects for which capture rules will be created. Instantiation is covered in the lesson titled “Instantiation.” 4. Create the downstream capture process by executing the DBMS_CAPTURE_ADM.CREATE_CAPTURE procedure: Specify FALSE for the use_database_link argument. For the first_scn argument, specify the value obtained in step 2. If you are not using log transport services to transmit the redo log files to the downstream database, set the logfile_assignment parameter to EXPLICIT. Here is an example: BEGIN DBMS_CAPTURE_ADM.CREATE_CAPTURE( queue_name => 'STREAMS_QUEUE', capture_name => 'CAPTURE_DOWNSTREAM_SITE1', use_database_link => FALSE, first_scn => 409391, source_database => 'SITE1.NET', logfile_assignment => 'explicit'); END; / If the logfile_assignment parameter is set to EXPLICIT, log transport services cannot be used to add redo log files to the capture process. In this case, you must transfer the redo log files manually to the site running the downstream database by using the DBMS_FILE_TRANSFER package, FTP, or some other transfer method, and then manually add the redo log files to the LogMiner session used by the capture process by using the following DDL statement: ALTER DATABASE REGISTER LOGICAL LOGFILE 'file_name' FOR 'capture_process_name'; 5. Create rules and rule sets for the capture process and assign the rule set to the capture process as a positive or negative rule set. 
You use the ADD_*_RULES procedures of the DBMS_STREAMS_ADM package to generate system-created rules and rule sets. You can also create rules by using the DBMS_RULE_ADM package. Oracle Database 11g: Implement Streams I - 304

305 Creating Downstream Processes with the MAINTAIN Procedure
DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS( schema_names => 'HR', source_directory_object => 'SRC_EXP_DIR', destination_directory_object => 'DST_EXP_DIR', source_database => 'AMER.US.ORACLE.COM', destination_database => 'EURO.US.ORACLE.COM', capture_name => 'DSTRM02_CAPTURE', capture_queue_table => 'DSTRM02_QUEUE_TABLE', capture_queue_name => 'DSTRM02_QUEUE', apply_name => 'DSTRM02_APPLY', apply_queue_table => 'DSTRM02_QUEUE_TABLE', apply_queue_name => 'DSTRM02_QUEUE', bi_directional => FALSE, include_ddl => TRUE); Creating Downstream Processes with the MAINTAIN Procedure You can use the MAINTAIN_* procedures in the DBMS_STREAMS_ADM package to create all Streams processes required for downstream capture. Because it is more efficient, use the same DSTRM02_QUEUE queue in the downstream capture database. This avoids propagating changes between queues. Additionally, use the same DSTRM02_QUEUE_TABLE queue table for the capture and apply processes. Oracle Database 11g: Implement Streams I - 305

306 Real-Time Downstream Capture
Enable real-time capture on the destination database: Start real-time mining on the source database: DBMS_CAPTURE_ADM.SET_PARAMETER( capture_name => 'DSTRM02_CAPTURE', parameter => 'downstream_real_time_mine', value => 'Y'); ALTER SYSTEM ARCHIVE LOG CURRENT; Real-Time Downstream Capture If you want to configure real-time capture, set the downstream_real_time_mine capture parameter to 'Y' and then switch the current redo log at the source database with ALTER SYSTEM ARCHIVE LOG CURRENT to start the mining. Oracle Database 11g: Implement Streams I - 306

307 Monitoring Downstream Capture Processes
SELECT capture_name, queue_name, status, source_database, capture_type FROM DBA_CAPTURE; CAPTURE_NAME QUEUE_NAME STATUS SOURCE_DATABASE CAPTURE_TYPE CAPTURE_DOWNSTREAM_SITE STREAMS_QUEUE DISABLED SITE1.NET DOWNSTREAM Monitoring Downstream Capture Processes To view general information about all downstream capture processes that run on a database, query the DBA_CAPTURE dictionary view at the downstream database. The capture_type column has a value of DOWNSTREAM for downstream capture processes and a value of LOCAL otherwise. For downstream capture processes, the value for source_database is not the same as the local database name. Oracle Database 11g: Implement Streams I - 307
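While a downstream capture process is running, its current activity can also be observed in the dynamic view V$STREAMS_CAPTURE (a sketch; run it at the downstream database):

```sql
-- STATE shows what the capture process is currently doing
-- (for example, CAPTURING CHANGES or WAITING FOR REDO)
SELECT capture_name, state,
       total_messages_captured, total_messages_enqueued
FROM   V$STREAMS_CAPTURE;
```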

308 Monitoring Log File Availability
SELECT r.CONSUMER_NAME, r.SOURCE_DATABASE SRC_DB, r.SEQUENCE#, r.NAME, r.DICTIONARY_BEGIN BEGIN, r.DICTIONARY_END END FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c WHERE r.CONSUMER_NAME = c.CAPTURE_NAME; CONSUMER_NAME SRC_DB SEQUENCE# NAME BEGIN END CAPTURE_DOWNSTREAM_SITE1 SITE1.NET /oracle/arch/SITE1_1_95_ arc YES YES CAPTURE_DOWNSTREAM_SITE1 SITE1.NET /oracle/arch/SITE1_1_96_ arc NO NO Monitoring Log File Availability You can query the DBA_REGISTERED_ARCHIVED_LOG view to display information about the archived redo log files that are registered for each capture process in a database. A redo log file is said to be registered when it has been assigned to a capture process, either implicitly or explicitly. The query shown in the slide displays information about the redo log files for both the local and downstream capture processes. Oracle Database 11g: Implement Streams I - 308

309 Oracle Database 11g: Implement Streams I - 309
Summary In this lesson, you should have learned how to: Describe downstream capture Configure downstream capture Test downstream capture Oracle Database 11g: Implement Streams I - 309

310 Oracle Database 11g: Implement Streams I - 310
Practice 10 Overview: Configuring Real-Time Downstream Capture of the HR Schema This practice covers the following topics: Configuring source and destination databases so that archived logs are automatically transported to the downstream capture site Configuring real-time, one-way replication of the HR schema Verifying the replication by inserting a test row, and then deleting it Removing the downstream replication processes Oracle Database 11g: Implement Streams I - 310

311 Result of Practice 10: Schema Replication with Downstream Capture
[Figure: redo log groups at the AMER database are shipped to standby log file groups at the EURO database, where DSTRM02_CAPTURE and DSTRM02_APPLY share DSTRM02_QUEUE_TABLE for capture and apply (an optimization)] Oracle Database 11g: Implement Streams I - 311

312 Configuring Synchronous Capture

313 Oracle Database 11g: Implement Streams I - 313
Objectives After completing this lesson, you should be able to: Configure synchronous capture Monitor synchronous capture Manage synchronous capture Oracle Database 11g: Implement Streams I - 313

314 When to Use Synchronous Capture
Use synchronous capture when: Redo-based capture is not possible (such as with the Standard Edition of the database) Redo or LogMiner-related capture cannot be used but other Streams background processes can be utilized Capture needs to occur at the same time as (synchronously with) the user transaction Changes should be stored in a queue on disk (persistently) You need to clone the data of a table with few updates [Figure: synchronous capture converts user changes into LCRs and enqueues them in the source queue; messages propagate to the destination queue, where the apply process dequeues the LCRs and applies the changes] When to Use Synchronous Capture For environments that do not support redo-based capture (such as the Standard Edition of the database), it is possible to capture changes as part of the user transaction activity. This form of capture is known as synchronous capture because the capture is simultaneous with (at the same time as) the user transaction. It uses an efficient internal mechanism to capture the changes and store them persistently (that is, on disk) in a queue. Oracle Database 11g: Implement Streams I - 314

315 Oracle Database 11g: Implement Streams I - 315
Synchronous Capture Captures the DML changes during transaction processing instead of from the redo logs Uses a single persistent ANYDATA queue to store the DML changes for specified tables For each row change: Captures all columns of the changed row, including unmodified columns Converts the change into a row LCR Enqueues a message containing the row LCR into the single persistent queue Synchronous Capture Synchronous capture is a new Streams feature that captures DML changes during the execution of a transaction. The row LCRs are created as the row changes occur. The changes are visible after the commit happens. Change records are written to the queue that is stored persistently on disk. The changes use the standard Streams processing from this point on. When synchronous capture is configured to capture changes to tables, the database that contains these tables is called the source database. When a DML change is made to a table, it can result in changes to one or more rows in the table. Synchronous capture captures each row change and converts it into a specific message format called a row LCR. After capturing a row LCR, synchronous capture enqueues a message containing the row LCR into a queue. Row LCRs created by synchronous capture always contain values for all columns of the row, even if some of the columns were not modified by the change. Synchronous capture is always associated with a single ANYDATA queue, and it enqueues messages only into this queue. Use the DBMS_STREAMS_ADM.SET_UP_QUEUE() procedure to create the requisite commit-time queue for synchronous capture. The queue used by synchronous capture must be a commit-time queue. Commit-time queues ensure that messages are grouped into transactions, and that the transaction groups are in commit system change number (CSCN) order. Synchronous capture always enqueues row LCRs into the persistent queue. 
A persistent queue is the portion of a queue that stores messages only on disk in a queue table, not in memory. Oracle Database 11g: Implement Streams I - 315

316 Oracle Database 11g: Implement Streams I - 316
Synchronous Capture (continued) You can create multiple queues and associate a different synchronous capture with each queue. Although synchronous capture must enqueue messages into a commit-time queue, messages captured by synchronous capture can be propagated to queues that are not commit-time queues. Therefore, any intermediate queue that stores the messages captured by synchronous capture does not need to be a commit-time queue. Also, apply processes that apply messages captured by synchronous capture can use queues that are not commit-time queues. Note: Synchronous capture does not capture certain types of changes and changes to certain data types in table columns. Also, synchronous capture never captures changes in the SYS, SYSTEM, or CTXSYS schemas. Synchronous capture captures only DML statements (INSERT, UPDATE, DELETE); DDL statements cannot be captured with synchronous capture. Note: You cannot use ADD_SCHEMA_RULES to create a synchronous capture. Oracle Database 11g: Implement Streams I - 316

317 Oracle Database 11g: Implement Streams I - 317
Captured Data Types Supported data types: VARCHAR2, NVARCHAR2; RAW; NUMBER; FLOAT; DATE; BINARY_FLOAT, BINARY_DOUBLE; CHAR, NCHAR; TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE; INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND. Unsupported data types: LONG; LONG RAW; CLOB; NCLOB; BLOB; BFILE; ROWID; user-defined types; Oracle-supplied special types. Data Types Captured by Synchronous Capture When capturing the row changes resulting from the DML changes made to tables, synchronous capture can capture changes made to columns of the following data types: VARCHAR2 NVARCHAR2 NUMBER FLOAT DATE BINARY_FLOAT BINARY_DOUBLE TIMESTAMP TIMESTAMP WITH TIME ZONE TIMESTAMP WITH LOCAL TIME ZONE INTERVAL YEAR TO MONTH INTERVAL DAY TO SECOND RAW CHAR NCHAR Oracle Database 11g: Implement Streams I - 317

318 Oracle Database 11g: Implement Streams I - 318
Data Types Captured by Synchronous Capture (continued) Synchronous capture does not capture the results of the DML changes made to columns of the following data types: LONG LONG RAW CLOB NCLOB BLOB BFILE ROWID User-defined types (including object types, REFs, varrays, and nested tables) Oracle-supplied types (including ANY types, XML types, spatial types, and media types) Synchronous capture raises an error if it tries to create a row LCR for a DML change to a table that contains a column of an unsupported data type. When synchronous capture raises an error, it writes the LCR that caused the error into its trace file, raises an ORA error, and is disabled. In this case, modify the rules used by synchronous capture to avoid the error. Oracle Database 11g: Implement Streams I - 318
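Before creating capture rules, it can help to check which tables in a schema Streams cannot fully support. One way (a sketch) is to query the DBA_STREAMS_UNSUPPORTED view, which reports such tables and the reason; note that it covers Streams capture in general rather than synchronous capture specifically, and the HR schema here is just an example:

```sql
SELECT table_name, reason
FROM   DBA_STREAMS_UNSUPPORTED
WHERE  owner = 'HR';
```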

319 Oracle Database 11g: Implement Streams I - 319
Captured Changes Synchronous capture captures the DML changes made using the following SQL statements: INSERT UPDATE DELETE MERGE Captured Changes Synchronous capture can capture only certain types of changes made to a database and its objects. When you specify that DML changes made to specific tables must be captured, synchronous capture captures the following types of DML changes made to these tables: INSERT UPDATE DELETE MERGE The following are considerations for capturing DML changes with synchronous capture: Synchronous capture converts each MERGE change into an INSERT or UPDATE change. MERGE is not a valid command type in a row LCR. Synchronous capture can capture changes made to an index-organized table only if the index-organized table does not contain any columns of the following data types: LONG LONG RAW CLOB NCLOB BLOB BFILE ROWID Oracle Database 11g: Implement Streams I - 319

320 Oracle Database 11g: Implement Streams I - 320
Captured Changes (continued) User-defined types (including object types, REFs, varrays, and nested tables) Oracle-supplied types (including ANY types, XML types, spatial types, and media types) If an index-organized table contains a column of one of these unsupported data types, synchronous capture raises an error when a user makes a change to the index-organized table and the change satisfies the synchronous capture rule set. Synchronous capture ignores the CALL, EXPLAIN PLAN, or LOCK TABLE statements. Synchronous capture cannot capture DML changes made to temporary tables or object tables. Synchronous capture raises an error if it attempts to capture such changes. If you share a sequence in multiple databases, the sequence values used for individual rows in these databases may vary. Also, changes to the actual sequence values are not captured by synchronous capture. For example, if a user references NEXTVAL or sets the sequence, synchronous capture does not capture changes to the resulting sequence. The following types of changes are ignored by synchronous capture: DDL changes The ALTER SESSION and SET ROLE session control statements The ALTER SYSTEM system control statement Changes made by direct path loads Invocations of PL/SQL procedures, which means that a call to a PL/SQL procedure is not captured. However, if a call to a PL/SQL procedure causes changes to database objects, these changes can be captured by synchronous capture if the changes satisfy the synchronous capture rule set. Changes made to a table or schema by online redefinition using the DBMS_REDEFINITION package. Online table redefinition is supported on a table for which synchronous capture captures changes, but the logical structure of the table must remain the same. Oracle Database 11g: Implement Streams I - 320

321 Configuring Synchronous Capture
You can use one of the following to configure synchronous capture: DBMS_STREAMS_ADM ADD_TABLE_RULES ADD_SUBSET_RULES DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE Configuring Synchronous Capture You can use any of the following procedures to create a synchronous capture: DBMS_STREAMS_ADM.ADD_TABLE_RULES DBMS_STREAMS_ADM.ADD_SUBSET_RULES DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE Both the procedures in the DBMS_STREAMS_ADM package create a synchronous capture with the specified name if it does not exist, create a positive rule set for the synchronous capture if it does not exist, and add table rules or subset rules to the rule set. The CREATE_SYNC_CAPTURE procedure creates a synchronous capture, but does not create a rule set or rules for the synchronous capture. However, the CREATE_SYNC_CAPTURE procedure enables you to specify an existing rule set to associate with the synchronous capture, and it enables you to specify a capture user other than the default capture user. The DBMS_STREAMS_ADM procedures can also be used with a SYNC CAPTURE explicitly created with CREATE_SYNC_CAPTURE to configure additional tables. Note: Synchronous capture must be used only for very specialized situations, such as tables with very few changes, or when using the standard edition of Oracle Database instead of the enterprise edition. Oracle Database 11g: Implement Streams I - 321
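For example, a single ADD_TABLE_RULES call with streams_type set to 'sync_capture' both creates the synchronous capture (if it does not exist) and adds a positive table rule to it. This is a sketch with hypothetical names (sync_capture, strmadmin.streams_queue, hr.departments):

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'sync_capture',    -- creates the synchronous capture
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE);
END;
/
```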

322 Configuring Oracle Streams to Use Synchronous Capture
Verify initialization parameters.
Create a Streams administrator (strmadmin) on each database.
Use DBMS_STREAMS_ADM.SET_UP_QUEUE to create an ANYDATA queue to associate with the synchronous capture.
Create all propagations that will propagate LCRs.
Create all apply processes that will dequeue LCRs.
Configure each apply process to apply user-enqueued messages.
Configuring Oracle Streams to Use Synchronous Capture
The following tasks must be completed before you configure a synchronous capture:
1. Ensure that the initialization parameters are set properly on any database that will use synchronous capture.
2. Create a Streams administrator (strmadmin) on each database involved in the Streams configuration.
3. Use DBMS_STREAMS_ADM.SET_UP_QUEUE to create an ANYDATA queue to associate with the synchronous capture. The queue must be a commit-time queue. Create the queue in the same database that will run the synchronous capture.
4. Create all propagations that will propagate the logical change records (LCRs) captured by the new synchronous capture.
5. Create all apply processes that will dequeue the LCRs captured by the new synchronous capture.
6. Configure each apply process to apply user-enqueued messages. Do not start the apply process until after the instantiation is performed.
Oracle Database 11g: Implement Streams I - 322

323 Configuring a Synchronous Capture
Instantiate the tables. Start the apply processes. Configuring a Synchronous Capture If you plan to configure propagations and apply processes that process the LCRs captured by the new synchronous capture, perform the configuration in the following order: 1. Instantiate the tables for which the new synchronous capture captures changes at all destination databases. 2. Start the apply processes that will process the LCRs captured by the synchronous capture. Note: The Maintain_* procedures cannot be used to create a synchronous capture process. Synchronous capture is used only for specific cases, such as for a small table with few transactions, or when the database is not an enterprise edition. Oracle Database 11g: Implement Streams I - 323
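The instantiation step above can be sketched in PL/SQL. This is an illustrative sketch only: the euro database link and the table name are assumptions (not from the course text), and it assumes the table data has already been copied to the destination. The DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION function mentioned later in this lesson returns the instantiation SCN that the destination apply process needs.

```sql
-- Sketch (hypothetical names): run at the source database (amer)
-- as the Streams administrator.
DECLARE
  iscn NUMBER;
BEGIN
  -- Prepare the table for instantiation and obtain the SCN.
  iscn := DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION(
            table_names => 'hr.employees');
  -- Record the SCN at the destination (over a database link named euro)
  -- so that the apply process skips changes already reflected in the
  -- instantiated copy of the table.
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@euro(
    source_object_name   => 'hr.employees',
    source_database_name => 'amer',
    instantiation_scn    => iscn);
END;
/
```

After the instantiation SCN is set at every destination, the apply processes can be started.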

324 Queues and Apply for Synchronous Capture
Set up a queue.
Set up the apply process.
Setting Up a Queue for Synchronous Capture
You can use a single procedure, DBMS_STREAMS_ADM.SET_UP_QUEUE, to create an ANYDATA queue and the queue table used by the queue.
Prerequisite: The specified queue table must not yet exist.
Example:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'hr');
END;
/
Creating an Apply Process for Synchronous Capture
The DBMS_APPLY_ADM.CREATE_APPLY procedure creates an apply process, but does not create a rule set or rules for the apply process. However, the CREATE_APPLY procedure enables you to specify an existing rule set for the apply process, either as a positive or a negative rule set, and a number of other options, such as apply handlers, an apply user, an apply tag, and whether to apply captured messages or user-enqueued messages. First, use the DBMS_APPLY_ADM.CREATE_APPLY procedure with apply_captured => FALSE. Thereafter, use the procedures in the DBMS_STREAMS_ADM package to configure the apply process for additional tables.
Oracle Database 11g: Implement Streams I - 324
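The CREATE_APPLY call described above can be sketched as follows. The apply process name is a hypothetical example; the queue name follows the SET_UP_QUEUE example on this slide. LCRs from a synchronous capture are persistent (user-enqueued) messages, hence apply_captured => FALSE.

```sql
-- Sketch (apply name sync_apply is hypothetical).
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.streams_queue',
    apply_name     => 'sync_apply',
    apply_captured => FALSE);  -- dequeue user-enqueued (persistent) LCRs
END;
/
-- Then add rules for each table with DBMS_STREAMS_ADM, for example:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'apply',
    streams_name => 'sync_apply',
    queue_name   => 'strmadmin.streams_queue');
END;
/
```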

325 Oracle Database 11g: Implement Streams I - 325
Configuring a Synchronous Capture by Using the DBMS_STREAMS_ADM Package
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.employees',
    streams_type       => 'sync_capture',
    streams_name       => 'sync01_capture',
    queue_name         => 'strmadmin.streams_queue',
    include_tagged_lcr => false);
END;
/
Configuring a Synchronous Capture by Using the DBMS_STREAMS_ADM Package
The following example uses the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to configure a synchronous capture. This example assumes the following:
- The source database is amer.
- The synchronous capture that will be created uses streams_queue.
- The synchronous capture that will be created captures the results of all data manipulation language (DML) changes made to the hr.employees table.
Perform the following steps:
1. Complete the configuration tasks, described earlier in this lesson. See also the “Preparing to Configure a Synchronous Capture” section in the Oracle Streams Concepts and Administration Guide.
2. Connect to the amer database as the Streams administrator.
3. Run the ADD_TABLE_RULES procedure (as shown in the slide) to configure the synchronous capture.
Oracle Database 11g: Implement Streams I - 325

326 Oracle Database 11g: Implement Streams I - 326
Configuring a Synchronous Capture by Using the DBMS_STREAMS_ADM Package (continued)
Running this procedure performs the following actions:
- Creates a synchronous capture named sync01_capture in the source database amer. A synchronous capture with the same name must not already exist.
- Enables the synchronous capture. A synchronous capture cannot be disabled.
- Associates the synchronous capture with an existing queue in amer named streams_queue.
- Creates a positive rule set in amer for the sync01_capture synchronous capture. The rule set has a system-generated name.
- Creates a rule that captures DML changes to the hr.employees table, and adds the rule to the positive rule set for the synchronous capture. The rule has a system-generated name.
- Specifies that the synchronous capture captures a change only if the session that makes the change has a NULL tag, because the INCLUDE_TAGGED_LCR parameter is set to FALSE. This behavior is accomplished through the system-created rules for the synchronous capture.
- Configures the Streams administrator as the capture user, because the Streams administrator ran the ADD_TABLE_RULES procedure.
- Prepares the hr.employees table for instantiation by automatically running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION function for the table.
If necessary, complete the steps described in the “After Configuring a Synchronous Capture” section in the Oracle Streams Concepts and Administration Guide.
Note: You cannot use the MAINTAIN_* procedures to create a synchronous capture.
Oracle Database 11g: Implement Streams I - 326

327 Oracle Database 11g: Implement Streams I - 327
Configuring a Synchronous Capture by Using the DBMS_CAPTURE_ADM Package
Create the synchronous capture:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE(
    queue_name    => 'strmadmin.streams_queue',
    capture_name  => 'sync02_capture',
    rule_set_name => 'strmadmin.sync02_rule_set',
    capture_user  => 'hr');
END;
/
Add subset rules to the synchronous capture:
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name         => 'hr.departments',
    dml_condition      => 'department_id=1700',
    streams_type       => 'sync_capture',
    streams_name       => 'sync02_capture',
    queue_name         => 'streams_queue',
    include_tagged_lcr => false);
END;
/
Configuring a Synchronous Capture by Using the DBMS_CAPTURE_ADM Package (continued)
This section contains an example that runs the procedures in the DBMS_CAPTURE_ADM package and the DBMS_STREAMS_ADM package to configure a synchronous capture. This example assumes the following:
- The source database is amer.
- The synchronous capture that will be created uses streams_queue.
- The synchronous capture that will be created uses an existing rule set named sync02_rule_set in the strmadmin schema.
- The synchronous capture that will be created captures the results of a subset of the DML changes made to the hr.departments table.
- The capture user for the synchronous capture that is created is hr. The hr user must have privileges to enqueue into streams_queue.
Oracle Database 11g: Implement Streams I - 327

328 Oracle Database 11g: Implement Streams I - 328
Configuring a Synchronous Capture by Using the DBMS_CAPTURE_ADM Package (continued)
Perform the following steps:
1. Complete the configuration preparation tasks. See also the “Preparing to Configure a Synchronous Capture” section in the Oracle Streams Concepts and Administration Guide.
2. Connect to the amer database as the Streams administrator.
3. Run the CREATE_SYNC_CAPTURE procedure to create a synchronous capture:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE(
    queue_name    => 'strmadmin.streams_queue',
    capture_name  => 'sync02_capture',
    rule_set_name => 'strmadmin.sync02_rule_set',
    capture_user  => 'hr');
END;
/
Running this procedure performs the following actions:
- Creates a synchronous capture named sync02_capture. A synchronous capture with the same name must not already exist.
- Enables the synchronous capture. A synchronous capture cannot be disabled.
- Associates the synchronous capture with an existing queue named streams_queue.
- Associates the synchronous capture with an existing rule set named sync02_rule_set in the strmadmin schema.
- Configures hr as the capture user for the synchronous capture.
4. Run the ADD_SUBSET_RULES procedure to instruct the synchronous capture to capture a subset of the DML changes to the hr.departments table:
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name         => 'hr.departments',
    dml_condition      => 'department_id=1700',
    streams_type       => 'sync_capture',
    streams_name       => 'sync02_capture',
    queue_name         => 'streams_queue',
    include_tagged_lcr => false);
END;
/
Oracle Database 11g: Implement Streams I - 328

329 Setting the Capture User for a Synchronous Capture
You can set the capture user for a synchronous capture by using the DBMS_CAPTURE_ADM.ALTER_SYNC_CAPTURE procedure. You need DBA privileges to call the ALTER_SYNC_CAPTURE procedure: BEGIN DBMS_CAPTURE_ADM.ALTER_SYNC_CAPTURE ( capture_name => 'sync_capture', capture_user => 'hr' ); END; / Setting the Capture User for a Synchronous Capture The capture user is the user who captures all DML changes that satisfy the synchronous capture rule set. Set the capture user for a synchronous capture by using the capture_user parameter in the ALTER_SYNC_CAPTURE procedure in the DBMS_CAPTURE_ADM package. To change the capture user, the user who invokes the ALTER_SYNC_CAPTURE procedure must be granted the DBA role. Only the SYS user can set capture_user to SYS. The procedure shown in the example sets the capture user for a synchronous capture named sync_capture to hr. Running this procedure grants the new capture user the enqueue privilege on the queue used by the synchronous capture and configures the user as a secure queue user of the queue. In addition, ensure that the capture user has the following privileges: The EXECUTE privilege on the rule set used by the synchronous capture The EXECUTE privilege on all custom rule-based transformation functions used in the rule set These privileges must be granted directly to the capture user. They cannot be granted through roles. Oracle Database 11g: Implement Streams I - 329
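The privilege requirements described above can be sketched as follows. The rule set name reuses sync02_rule_set from the earlier example; the transformation function name is a hypothetical placeholder. Both grants must be made directly to the capture user, not through a role.

```sql
-- Sketch: grant EXECUTE on the synchronous capture rule set directly to hr.
BEGIN
  DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ON_RULE_SET,
    object_name  => 'strmadmin.sync02_rule_set',
    grantee      => 'hr',
    grant_option => FALSE);
END;
/
-- Grant EXECUTE on any custom rule-based transformation function used by
-- the rule set (my_transform_func is a hypothetical function name):
GRANT EXECUTE ON strmadmin.my_transform_func TO hr;
```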

330 Displaying the Tables for Which Synchronous Capture Captures Changes
SELECT r.STREAMS_NAME, r.RULE_NAME, r.SUBSETTING_OPERATION,
       t.TABLE_OWNER, t.TABLE_NAME, t.ENABLED
FROM DBA_STREAMS_TABLE_RULES r, DBA_SYNC_CAPTURE_TABLES t
WHERE r.STREAMS_TYPE = 'SYNCHRONOUS CAPTURE'
AND r.TABLE_OWNER = t.TABLE_OWNER
AND r.TABLE_NAME = t.TABLE_NAME;

Synchronous                    Subsetting Table
Capture Name    Rule Name      Operation  Owner  Table Name   Enabled?
SYNC01_CAPTURE  EMPLOYEES                 HR     EMPLOYEES    YES
SYNC02_CAPTURE  DEPARTMENTS24  DELETE     HR     DEPARTMENTS  YES
SYNC02_CAPTURE  DEPARTMENTS23  UPDATE     HR     DEPARTMENTS  YES
SYNC02_CAPTURE  DEPARTMENTS22  INSERT     HR     DEPARTMENTS  YES

Displaying the Tables for Which Synchronous Capture Captures Changes
The DBA_SYNC_CAPTURE_TABLES view displays the tables whose DML changes are captured by a synchronous capture in the local database. The DBA_STREAMS_TABLE_RULES view displays each synchronous capture name and the rules used by each synchronous capture. You can view the following information by executing the query shown in the slide:
- The name of each synchronous capture
- The name of each rule used by the synchronous capture
- If the rule is a subset rule, the type of subsetting operation covered by the rule
- The owner of each table specified in each rule
- The name of each table specified in each rule
- Whether synchronous capture is enabled or disabled for the table. If synchronous capture is enabled for a table, it captures the DML changes made to the table. If synchronous capture is not enabled for a table, it does not capture the DML changes made to the table.
Oracle Database 11g: Implement Streams I - 330

331 Oracle Database 11g: Implement Streams I - 331
Displaying the Tables for Which Synchronous Capture Captures Changes (continued) To display this information, run the following query: COLUMN STREAMS_NAME HEADING 'Synchronous|Capture Name' FORMAT A15 COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15 COLUMN SUBSETTING_OPERATION HEADING 'Subsetting|Operation' FORMAT A10 COLUMN TABLE_OWNER HEADING 'Table|Owner' FORMAT A10 COLUMN TABLE_NAME HEADING 'Table Name' FORMAT A15 COLUMN ENABLED HEADING 'Enabled?' FORMAT A8 SELECT r.STREAMS_NAME, r.RULE_NAME, r.SUBSETTING_OPERATION, t.TABLE_OWNER, t.TABLE_NAME, t.ENABLED FROM DBA_STREAMS_TABLE_RULES r, DBA_SYNC_CAPTURE_TABLES t WHERE r.STREAMS_TYPE = 'SYNCHRONOUS CAPTURE' AND r.TABLE_OWNER = t.TABLE_OWNER AND r.TABLE_NAME = t.TABLE_NAME; Your output looks similar to the following: Synchronous Subsetting Table Capture Name Rule Name Operation Owner TableName Enabled? SYNC01_CAPTURE EMPLOYEES HR EMPLOYEES YES SYNC02_CAPTURE DEPARTMENTS24 DELETE HR DEPARTMENTS YES SYNC02_CAPTURE DEPARTMENTS23 UPDATE HR DEPARTMENTS YES SYNC02_CAPTURE DEPARTMENTS22 INSERT HR DEPARTMENTS YES The preceding output indicates that the sync01_capture synchronous capture captures the DML changes made to the hr.employees table. The output also indicates that the sync02_capture synchronous capture captures a subset of the changes to the hr.departments table. If the ENABLED column shows NO for a table, synchronous capture does not capture changes to the table. The ENABLED column shows NO when a table rule is added to a synchronous capture rule set by a procedure other than ADD_TABLE_RULES or ADD_SUBSET_RULES in the DBMS_STREAMS_ADM package. For example, if the ADD_RULE procedure in the DBMS_RULE_ADM package adds a table rule to a synchronous capture rule set, the table appears when you query the DBA_SYNC_CAPTURE_TABLES view, but synchronous capture does not capture the DML changes to the table. No results appear in the DBA_SYNC_CAPTURE_TABLES view for schema and global rules. 
Oracle Database 11g: Implement Streams I - 331

332 Viewing the Extra Attributes Captured by Synchronous Capture
SELECT CAPTURE_NAME, ATTRIBUTE_NAME, INCLUDE FROM DBA_CAPTURE_EXTRA_ATTRIBUTES WHERE CAPTURE_TYPE = 'SYNC_CAPTURE' ORDER BY CAPTURE_NAME; Capture Process or Attribute Include Synchronous Capture Name Attribute in LCRs? SYNC_CAPTURE ROW_ID NO SYNC_CAPTURE SERIAL# NO SYNC_CAPTURE SESSION# NO SYNC_CAPTURE THREAD# NO Viewing the Extra Attributes Captured by Synchronous Capture You can use the INCLUDE_EXTRA_ATTRIBUTE procedure in the DBMS_CAPTURE_ADM package to instruct a capture process or synchronous capture to capture one or more extra attributes and include the extra attributes in the LCRs. The query displays the extra attributes included in the LCRs that are captured by each capture process and synchronous capture in the local database: COLUMN CAPTURE_NAME HEADING 'Capture Process or|Synchronous Capture' FORMAT A20 COLUMN ATTRIBUTE_NAME HEADING 'Attribute Name' FORMAT A15 COLUMN INCLUDE HEADING 'Include Attribute in LCRs?' FORMAT A30 SELECT CAPTURE_NAME, ATTRIBUTE_NAME, INCLUDE FROM DBA_CAPTURE_EXTRA_ATTRIBUTES WHERE CAPTURE_TYPE = 'SYNC_CAPTURE' ORDER BY CAPTURE_NAME; Oracle Database 11g: Implement Streams I - 332
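The INCLUDE_EXTRA_ATTRIBUTE procedure mentioned above can be sketched as follows. The call instructs the synchronous capture to include the transaction name in the LCRs it captures, which would produce the TX_NAME = YES row in the sample output; the capture name sync_capture follows the sample output above.

```sql
-- Sketch: include the transaction name extra attribute in captured LCRs.
BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'sync_capture',
    attribute_name => 'tx_name',
    include        => TRUE);   -- set to FALSE to exclude it again
END;
/
```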

333 Oracle Database 11g: Implement Streams I - 333
Viewing the Extra Attributes Captured by Synchronous Capture (continued)
Your output looks similar to the following:

Capture Process or
Synchronous Capture  Attribute Name  Include Attribute in LCRs?
SYNC_CAPTURE         ROW_ID          NO
SYNC_CAPTURE         SERIAL#         NO
SYNC_CAPTURE         SESSION#        NO
SYNC_CAPTURE         THREAD#         NO
SYNC_CAPTURE         TX_NAME         YES
SYNC_CAPTURE         USERNAME        NO

Based on this output, the capture process or synchronous capture named sync_capture includes the transaction name (tx_name) in the LCRs that it captures, but does not include any other extra attributes in the LCRs that it captures. To determine whether the name returned by the CAPTURE_NAME column is a capture process or a synchronous capture, query the DBA_CAPTURE and DBA_SYNC_CAPTURE views.
Oracle Database 11g: Implement Streams I - 333

334 Synchronous Capture Rules
Captures changes based on user-defined rules Requires rules to be in a positive rule set Does not allow negative rule sets Uses rules created using the DBMS_STREAMS_ADM package procedures: ADD_TABLE_RULES ADD_SUBSET_RULES Ignores rules: Added by any other procedure Created by the DBMS_RULE_ADM package Synchronous Capture Rules Synchronous capture either captures or discards changes based on the rules that you define. Each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. You must place these rules in a positive rule set. If such a rule evaluates to TRUE for a change, synchronous capture captures the change. You get an error if you try to specify a rule in the negative rule set. You can specify synchronous capture rules at the table level. A table rule captures or discards the row changes resulting from DML changes to a particular table. Subset rules are table rules that include a subset of the row changes to a particular table. Synchronous capture does not use schema or global rules. All synchronous capture rules must be created with the ADD_TABLE_RULES or ADD_SUBSET_RULES procedures in the DBMS_STREAMS_ADM package. Synchronous capture does not capture changes based on the following types of rules: Rules added to the synchronous capture rule set by a procedure other than ADD_TABLE_RULES or ADD_SUBSET_RULES in the DBMS_STREAMS_ADM package Rules created by the DBMS_RULE_ADM package If these types of rules are in a synchronous capture rule set, synchronous capture ignores them. Oracle Database 11g: Implement Streams I - 334

335 Oracle Database 11g: Implement Streams I - 335
Synchronous Capture Rules (continued) A synchronous capture can use a rule set created by the CREATE_RULE_SET procedure in the DBMS_RULE_ADM package. For example, the following procedure sets the positive rule set for a synchronous capture named sync_capture to sync_rule_set. BEGIN DBMS_CAPTURE_ADM.ALTER_SYNC_CAPTURE( capture_name => 'sync_capture', rule_set_name => 'strmadmin.sync_rule_set'); END; / You must add rules to the rule set with the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure. Note: When a rule is in the rule set for a synchronous capture, do not change the following rule conditions: :dml.get_object_name and :dml.get_object_owner. Changing these conditions can cause the synchronous capture not to capture changes to the database object. However, you can change other conditions in synchronous capture rules. Oracle Database 11g: Implement Streams I - 335

336 Displaying the Queue and Rule Set of Each Synchronous Capture
SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, CAPTURE_USER FROM DBA_SYNC_CAPTURE; Synchronous Synchronous Capture Name Capture Queue Positive Rule Set Capture User SYNC01_CAPTURE STRM01_QUEUE RULESET$_ STRMADMIN SYNC02_CAPTURE STRM02_QUEUE SYNC02_RULE_SET HR Displaying the Queue and Rule Set of Each Synchronous Capture You can, by running the query shown in the slide, display the following information about each synchronous capture in a database: The synchronous capture name The name of the queue used by the synchronous capture The name of the positive rule set used by the synchronous capture The capture user for the synchronous capture COLUMN CAPTURE_NAME HEADING 'Synchronous|Capture Name' FORMAT A20 COLUMN QUEUE_NAME HEADING 'Synchronous|Capture Queue' FORMAT A20 COLUMN RULE_SET_NAME HEADING 'Positive Rule Set' FORMAT A20 COLUMN CAPTURE_USER HEADING 'Capture User' FORMAT A15 SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, CAPTURE_USER FROM DBA_SYNC_CAPTURE; Your output looks similar to the following: Synchronous Synchronous Capture Name Capture Queue Positive Rule Set Capture Usr SYNC01_CAPTURE STRM01_QUEUE RULESET$_ STRMADMIN SYNC02_CAPTURE STRM02_QUEUE SYNC02_RULE_SET HR Oracle Database 11g: Implement Streams I - 336

337 Dropping a Synchronous Capture
BEGIN DBMS_CAPTURE_ADM.DROP_CAPTURE( capture_name => 'sync_capture', drop_unused_rule_sets => true); END; / Dropping a Synchronous Capture You run the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package to drop a synchronous capture. For example, the procedure shown in the slide drops a synchronous capture named sync_capture. Because the DROP_UNUSED_RULE_SETS parameter is set to true, this procedure also drops any rule sets used by the sync_capture synchronous capture, unless the rule set is used by another Streams client. If this procedure drops a rule set, it also drops any rules in the rule set that are not in another rule set. Oracle Database 11g: Implement Streams I - 337

338 Listing Objects That Are Not Compatible with Synchronous Capture
SELECT OWNER, TABLE_NAME, COLUMN_NAME, SYNC_CAPTURE_REASON FROM DBA_STREAMS_COLUMNS WHERE SYNC_CAPTURE_VERSION IS NULL; Listing Objects That Are Not Compatible with Synchronous Capture
A database object or a column in a table is not compatible with synchronous capture if synchronous capture cannot capture changes to it. For example, synchronous capture cannot capture changes to object tables. Synchronous capture can capture changes to relational tables, but it cannot capture changes to columns of some data types. The query displays:
- The object owner
- The database object name (usually a table name)
- The column name of each database object column that is not compatible with synchronous capture
- The reason why the column is not compatible with synchronous capture
When a query on the DBA_STREAMS_COLUMNS view returns NULL for SYNC_CAPTURE_VERSION, it means that synchronous capture does not support the column. The WHERE clause in the query ensures that the query returns only those columns that are not supported by synchronous capture.
Oracle Database 11g: Implement Streams I - 338

339 Oracle Database 11g: Implement Streams I - 339
Listing Objects That Are Not Compatible with Synchronous Capture (continued) The following is a sample of the output from the query shown in the previous slide: Object Synchronous Owner Object Name Column Name Capture Reason OE WAREHOUSES WH_GEO_LOCATION ADT column OE LINEITEM_TABLE DESCRIPTION object table OE ACTION_TABLE DATE_ACTIONED object table OE CATEGORIES_TAB CATEGORY_ID object table SH DR$SUP_TEXT_IDX$I TOKEN_TEXT domain index SH DR$SUP_TEXT_IDX$I TOKEN_LAST domain index SH DR$SUP_TEXT_IDX$I TOKEN_INFO domain index SH DR$SUP_TEXT_IDX$N NLT_MARK domain index To avoid synchronous capture errors, configure the synchronous capture rule set to ensure that synchronous capture does not try to capture changes to an unsupported database object, such as an object table. To avoid synchronous capture errors while capturing changes to relational tables, you must configure the synchronous capture rule set to ensure that synchronous capture does not try to capture changes to a table that contains one or more unsupported columns. Oracle Database 11g: Implement Streams I - 339

340 Oracle Database 11g: Implement Streams I - 340
Summary In this lesson, you should have learned how to: Configure synchronous capture Monitor synchronous capture Manage synchronous capture Oracle Database 11g: Implement Streams I - 340

341 Practice 11 Overview: Configuring Synchronous Capture
This practice covers the following topics: Verifying initialization parameters Using DBMS_STREAMS_ADM.SET_UP_QUEUE to create an ANYDATA queue to associate with the synchronous capture Creating all propagations that will propagate LCRs Creating all apply processes that will dequeue LCRs Configuring each apply process to apply user-enqueued messages Instantiating the tables for which a new synchronous capture captures changes at all destination databases Starting apply processes that will process the LCRs captured by the synchronous capture Oracle Database 11g: Implement Streams I - 341

342 Practice 11 Result: Configuring Synchronous Capture
SYNC_CAP_QUEUE SYNC_APPLY_QUEUE SYNC_CAPTURE SYNC_PROP SYNC_APPLY HR schema SH.COUNTRIES table SH.COUNTRIES table HR schema AMER database EURO database Oracle Database 11g: Implement Streams I - 342

343 Transformations

344 Oracle Database 11g: Implement Streams I - 344
Objectives After completing this lesson, you should be able to: Describe a rule-based transformation Devise a plan for implementing rule-based transformations in your Streams environment Specify declarative rule-based transformations Implement a custom rule-based transformation Manage rule-based transformations Oracle Database 11g: Implement Streams I - 344

345 Rule-Based Transformations
A rule-based transformation is: Any modification to a message that results from the evaluation of a rule Performed when a rule evaluates to TRUE There are two types of rule-based transformations: Declarative Custom Custom rule-based transformations can be either: One-to-one One-to-many Rule-Based Transformations A rule-based transformation is any modification to a message that results from the evaluation of a rule. For example, you may use a rule-based transformation when you want to change the data type of a particular column in a table from NUMBER to VARCHAR2 at a particular database. In this case, the transformation takes as input a SYS.AnyData object containing a logical change record (LCR) with a NUMBER data type for a column and returns a SYS.AnyData object containing an LCR with a VARCHAR2 data type for the same column. Declarative rule-based transformations cover a set of common transformation scenarios for row LCRs. You specify (or declare) such a transformation procedure in the DBMS_STREAMS_ADM package. Declarative rule-based transformations run faster because the transformations are run internally without using PL/SQL. Custom rule-based transformations require a user-defined PL/SQL function to perform the transformation. The function takes as input an ANYDATA object containing a message and returns either of the following: An ANYDATA object containing the transformed message (one-to-one transformation) An array that contains zero or more ANYDATA encapsulations of a message (one-to-many transformation). However, one-to-many transformations are supported only for Streams capture processes. Oracle Database 11g: Implement Streams I - 345
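The one-to-one custom transformation described above can be sketched as a PL/SQL function. This is an illustrative sketch only: the function name, schema, and JOB_ID column are hypothetical, and it mirrors the NUMBER-to-VARCHAR2 example in the text. The function accepts an ANYDATA encapsulation of a message and returns an ANYDATA encapsulation of the transformed message.

```sql
-- Sketch of a one-to-one custom rule-based transformation function
-- (hypothetical names): converts a NUMBER value in the JOB_ID column
-- of a row LCR to VARCHAR2.
CREATE OR REPLACE FUNCTION strmadmin.job_id_to_varchar2(in_any IN ANYDATA)
RETURN ANYDATA
IS
  lcr     SYS.LCR$_ROW_RECORD;
  rc      PLS_INTEGER;
  job_val ANYDATA;
  job_num NUMBER;
BEGIN
  -- Transform only row LCRs; pass anything else through unchanged.
  IF in_any.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
    rc := in_any.GETOBJECT(lcr);
    job_val := lcr.GET_VALUE('new', 'JOB_ID');
    IF job_val IS NOT NULL AND job_val.GETTYPENAME = 'SYS.NUMBER' THEN
      rc := job_val.GETNUMBER(job_num);
      lcr.SET_VALUE('new', 'JOB_ID',
                    ANYDATA.CONVERTVARCHAR2(TO_CHAR(job_num)));
    END IF;
    RETURN ANYDATA.CONVERTOBJECT(lcr);
  END IF;
  RETURN in_any;
END;
/
-- To associate the function with a rule (rule name hypothetical):
-- BEGIN
--   DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
--     rule_name          => 'strmadmin.hr51',
--     transform_function => 'strmadmin.job_id_to_varchar2');
-- END;
-- /
```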

346 Rule-Based Transformations
You can specify transformations for messages:
- Entering the staging area
- Leaving the staging area
- Being propagated between staging areas
Transformation examples: change format, data type, column name, table name
Rule-Based Transformations (continued)
A transformation is a change to the form of a Streams object. Examples include:
- Changing the data type for a particular column
- Renaming or removing a column
- Adding a column with either old or new column values
- Renaming the owner of a database object
- Changing the table name
- Splitting a column into several columns
- Combining several columns into one column
- Modifying the data or data format
Note:
- Rule-based transformations differ from transformations that are performed by using the DBMS_TRANSFORM package. (The DBMS_TRANSFORM package is used by Oracle Streams Advanced Queuing.)
- With a large number of transformations or complex transformations, it is better to modify the messages within a data manipulation language (DML) handler to take advantage of the apply process parallelism.
- A rule must be in a positive rule set for its rule-based transformation to be invoked. A rule-based transformation specified for a rule in a negative rule set is ignored.
Oracle Database 11g: Implement Streams I - 346

347 Rule-Based Transformations: Example
APPLY: Change the table owner to HRRES Site1 Site2 CAPTURE: Remove the SALARY and COMMISSION columns PROPAGATE: Change the JOB_ID column data type Site3 Rule-Based Transformations: Example You can set rule-based transformations at different points in the Streams environment. To avoid the propagation of data (for example, the salary and commission column values) throughout the network, you can use a transformation on the capture process to remove these columns from the LCR. If the database schema architecture differs at two different sites, you can use a transformation on apply or propagate to modify the LCR or user-enqueued message to match the settings at the destination site. Oracle Database 11g: Implement Streams I - 347

348 Rule-Based Transformations and Capture
For capture, the PL/SQL function is called to transform the object before it is enqueued into the staging area. The advantages of performing transformations during capture include: Higher security by restricting staged data Reduced or eliminated storage space Less overhead because the transformations that are required by multiple locations need to be performed only once The disadvantage is that all sites receive the transformed message. Rule-Based Transformations and Capture The advantages of performing transformations during capture are the following: It can improve security if the transformation removes or changes private information because this information remains only at the source database and is not propagated. It may reduce space consumption, depending on the type of transformation performed. For example, a transformation that subsets data results in less data to enqueue, propagate, and apply. It reduces transformation overhead when there are multiple destinations for a message because the transformation is performed only once at the source (not multiple times). Oracle Database 11g: Implement Streams I - 348

349 Rule-Based Transformations and Propagation
For propagation, the PL/SQL function is called to transform the object while it is being dequeued and before it is propagated to the destination queue. The advantages of performing transformations during propagation include: Higher security by restricting staged data More flexibility in determining whether a site receives the original message or the transformed message The disadvantage is that the site that receives the transformed message can propagate only the transformed message to other sites. Rule-Based Transformations and Propagation The advantages of performing transformations during propagation are the following: It can improve security if the transformation removes or changes private information before the messages are propagated. Some destination queues can receive a transformed message, whereas other destination queues can receive the original message. The disadvantage of this approach is that, after a message is transformed, any database to which it is propagated after the first propagation receives the transformed message. Here is an example: 1. A message is captured on SITE1.NET. 2. During propagation to SITE2.NET, the message is transformed. 3. SITE2.NET then propagates the message to SITE3.NET. 4. SITE3.NET receives the transformed message and the information about the original message is lost to SITE3.NET. Oracle Database 11g: Implement Streams I - 349

350 Rule-Based Transformations and Apply
For apply, the PL/SQL function is called to transform the object while it is being dequeued and before it is applied.
The advantage of performing transformations during apply is that each site can transform the original message in any manner necessary.
The disadvantages of performing transformations during apply are:
- More overhead and staging space if the same transformations must be performed at multiple sites
- All data, and possibly restricted information, is sent to all sites
Rule-Based Transformations and Apply
The advantage of performing transformations during apply is that any database to which the message is propagated after the first propagation can receive the message in its original form. Here is an example:
1. A message is captured at SITE1.NET and propagated to SITE2.NET.
2. The message is transformed and applied at SITE2.NET.
3. SITE2.NET propagates the message to SITE3.NET.
4. SITE3.NET receives the original message as long as the propagation does not use a transformation. SITE3.NET can then perform a different transformation on the data than was performed at SITE2.NET.
The disadvantage of this approach is that security may be a concern if the messages contain private information, because all databases to which the messages are propagated receive the original messages. Furthermore, if multiple sites need to perform the same transformation, the transformation procedure has to be executed at each site, resulting in overhead and possibly requiring additional storage space.
Oracle Database 11g: Implement Streams I - 350

351 Declarative LCR Transformations
(Diagram: the declarative transformations ADD_COLUMN, DELETE_COLUMN, RENAME_COLUMN, RENAME_TABLE, and RENAME_SCHEMA feed the apply process.) Declarative LCR Transformations The declarative rule-based transformations introduced in Oracle Database 10g, Release 2 cover a set of common transformation scenarios for row LCRs, including: Renaming a schema or table Adding a column with a constant or system-generated default value Renaming a column Deleting a column You specify (or declare) such a transformation by using a procedure in the DBMS_STREAMS_ADM package. Oracle Streams performs declarative transformations internally, without invoking PL/SQL. These internal transformation functions enable you to modify the contents of an LCR in specific ways without writing your own PL/SQL functions. Declarative transformations can be added to rules for capture, propagation, or apply. Note Declarative rule-based transformations can transform only row LCRs. These row LCRs can be captured row LCRs or user-enqueued row LCRs (DML only). ADD_COLUMN transformations cannot add columns of the following data types: BLOB, CLOB, NCLOB, BFILE, LONG, LONG RAW, ROWID, and user-defined types (including object types, REFs, varrays, nested tables, and Oracle-supplied object types). Multiple declarative transformations can be configured on a single rule. In addition, a custom transformation can be configured on a rule that has declarative transformations. Oracle Database 11g: Implement Streams I - 351

352 Using Declarative Transformations
BEGIN
  DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name        => 'STRMADMIN.HR51',
    from_schema_name => 'HR',
    to_schema_name   => 'DEMO',
    step_number      => 0);
END;
/
(HR → DEMO)
Using Declarative Transformations You can specify multiple transformations for each rule. When there are multiple transformations specified for a single rule, the transformations are applied to the LCR in the following default order: 1. Delete column. 2. Rename column. 3. Add column. 4. Rename table. 5. Rename schema. The results of each transformation step are used in each subsequent step, as in these examples: If you delete column COMMENT and then try to add a column COMMENT, this succeeds because the Delete column transformation is executed before the Add column transformation. If you rename table HR.EMPLOYEES to DEMO.EMP and have a schema transformation from DEMO to TEST, then HR.EMPLOYEES is eventually renamed as TEST.EMP. If you rename table HR.EMPLOYEES to DEMO.EMP and the schema transformation is from HR to TEST, HR.EMPLOYEES becomes DEMO.EMP instead of TEST.EMP because, when the rename schema transformation is performed, it sees the DEMO.EMP table and not an HR-owned table. Oracle Database 11g: Implement Streams I - 352

353 Oracle Database 11g: Implement Streams I - 353
Using Declarative Transformations (continued) Each declarative transformation is associated with a step number that determines its order of execution. Transformations are executed in the order of increasing step number. The default step number for each transformation is 0. Transformations with the same step number are executed in the default order listed on the preceding page. Declarative transformations give better performance than custom rule-based transformations. Advanced users can define their own order of execution by specifying a step number when declaring the transformation. For example, if you want to execute the rename schema transformations first and then use the default ordering for all other transformations, you could specify step number 1 for the rename schema transformation and step number 2 for all other transformations. Declarative transformations are performed before custom rule-based transformations. Oracle Database 11g: Implement Streams I - 353
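As a hedged sketch of the step-number ordering described above (the rule name is carried over from the earlier RENAME_SCHEMA example; the table names are illustrative), giving the rename schema transformation a lower step number forces it to run before the rename table transformation, reversing the default order:

```sql
BEGIN
  -- Step 1: rename the schema first (lower step numbers execute earlier)
  DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name        => 'STRMADMIN.HR51',
    from_schema_name => 'HR',
    to_schema_name   => 'TEST',
    step_number      => 1);

  -- Step 2: the rename table transformation now sees the TEST schema,
  -- because the schema rename has already been applied
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'STRMADMIN.HR51',
    from_table_name => 'TEST.EMPLOYEES',
    to_table_name   => 'TEST.EMP',
    step_number     => 2);
END;
/
```

With the default step number (0) for both, the table rename would run first and the `from_table_name` would have to be written against the original HR schema instead.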

354 Custom Rule-Based Transformations
Require a user-defined PL/SQL function to perform the transformation Can be configured for: Capture, apply, or propagation DML or DDL LCRs User-enqueued messages Can be used to perform one-to-many transformations on capture Can be used to implement transformations that are not available with declarative transformations Custom Rule-Based Transformations To specify a custom rule-based transformation, use the DBMS_STREAMS_ADM. SET_RULE_TRANSFORM_FUNCTION procedure. You can use a custom rule-based transformation to modify both captured and user-enqueued messages. Custom rule-based transformations require a user-defined PL/SQL function that takes as input an ANYDATA object containing a message and returns either an ANYDATA object containing the transformed message, or an array of zero or more ANYDATA encapsulations of a message. For example, a custom rule-based transformation can be used when the data type of a particular column in a table is different at two different databases. The column may be a NUMBER column in the source database and a VARCHAR2 column in the destination database. In this case, the transformation takes as input an ANYDATA object containing a row LCR with a NUMBER data type for a column and returns an ANYDATA object containing a row LCR with a VARCHAR2 data type for the same column. You may use custom rule-based transformations in the following cases: Splitting a column into several columns Combining several columns into one column Modifying the contents of a column (for example, to change dates) Oracle Database 11g: Implement Streams I - 354

355 Implementing Custom Rule-Based Transformations
The following steps outline the general procedure for using custom rule-based transformations: Create a PL/SQL function that performs the transformation. Grant the EXECUTE privilege on the function to the appropriate user. Create or locate the rules for which the transformation will be used. Set the custom rule-based transformation for each rule by running the SET_RULE_TRANSFORM_FUNCTION procedure. Implementing Custom Rule-Based Transformations The user who calls the transformation function must have the EXECUTE privilege on the function. This may be the Streams administrator, or it may be a capture or apply user. After the rules have been created, you can locate the system-created rules by querying the DBA_STREAMS_RULES view. You can use a query similar to: SELECT rule_name, rule_condition, subsetting_operation FROM DBA_STREAMS_RULES WHERE object_name='<table_name>'; You can also obtain this information by using the OUT parameters when you run the ADD_*_RULES procedures of the DBMS_STREAMS_ADM package. If you want to configure a custom rule-based transformation on subset rules, ensure that you associate the transformation function with each system-created rule (INSERT, UPDATE, and DELETE). As a final step, if you create the rule manually (not by using DBMS_STREAMS_ADM), you must assign the rule to a rule set. You must also assign that rule set to a capture or apply process, or to a propagation job for the transformation to be used. Oracle Database 11g: Implement Streams I - 355
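The four steps above can be sketched end to end as follows. This is a hedged outline, not a definitive implementation: the function name (hr.devuser_to_test), the grantee (strmadmin), the table (DEPARTMENTS), and the rule name (departments5) are taken from examples elsewhere in this lesson and may differ in your environment.

```sql
-- 1. Create the transformation function (shown later in this lesson):
--    CREATE OR REPLACE FUNCTION hr.devuser_to_test ...

-- 2. Grant EXECUTE on the function to the user that will invoke it
--    (the Streams administrator, or the capture/apply user)
GRANT EXECUTE ON hr.devuser_to_test TO strmadmin;

-- 3. Locate the system-created rule for the table
SELECT rule_owner, rule_name, rule_condition
FROM   dba_streams_rules
WHERE  object_name = 'DEPARTMENTS';

-- 4. Associate the transformation function with the rule found in step 3
BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.departments5',
    transform_function => 'hr.devuser_to_test');
END;
/
```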

356 Oracle Database 11g: Implement Streams I - 356
Modifying an LCR You can modify the values of an LCR when: Performing transformations Processing the LCR within an apply handler There are several member functions available for getting and setting the LCR values, such as: GET_ or SET_OBJECT_NAME GET_ or SET_OBJECT_OWNER GET_ or SET_SOURCE_DATABASE_NAME GET_ or SET_COMMAND_TYPE GET_ or SET_TAG GET_ or SET_COMMIT_SCN GET_ or SET_SOURCE_TIME GET_ or SET_EXTRA_ATTRIBUTE Modifying an LCR The following functions and procedures are common to both the LCR$_ROW_RECORD and LCR$_DDL_RECORD types, and are used when writing custom rule-based transformation functions: GET_COMMAND_TYPE: Returns the command type of the LCR GET_OBJECT_NAME: Returns the name of the object that is changed by the LCR GET_OBJECT_OWNER: Returns the owner of the object that is changed by the LCR GET_SCN: Returns the system change number (SCN) of the LCR GET_SOURCE_DATABASE_NAME: Returns the source database name GET_TAG: Returns the tag for the LCR GET_EXTRA_ATTRIBUTE: Returns the value for the specified extra attribute in the LCR GET_TRANSACTION_ID: Returns the transaction identifier of the LCR IS_NULL_TAG: Returns Y if the tag for the LCR is NULL, and returns N if the tag for the LCR is not NULL GET_COMPATIBLE: Returns the minimal database compatibility required to support the LCR SET_COMMAND_TYPE: Sets the command type SET_OBJECT_NAME: Sets the name of the object that is changed by the LCR SET_OBJECT_OWNER: Sets the owner of the object that is changed by the LCR Oracle Database 11g: Implement Streams I - 356

357 Oracle Database 11g: Implement Streams I - 357
Modifying an LCR (continued)
SET_TAG: Sets the tag for the LCR
SET_SOURCE_DATABASE_NAME: Sets the source database name of the object that is changed by the LCR
SET_EXTRA_ATTRIBUTE: Sets the value for the specified extra attribute in the LCR
LCR$_DDL_RECORD Subprograms
EXECUTE: Executes the LCR under the security domain of the current user
GET_BASE_TABLE_NAME: Returns the base (dependent) table name
GET_BASE_TABLE_OWNER: Returns the base (dependent) table owner
GET_CURRENT_SCHEMA: Returns the default schema (user) name
GET_DDL_TEXT: Gets the data definition language (DDL) text in a CLOB
GET_LOGON_USER: Returns the logon username
GET_OBJECT_TYPE: Returns the type of the object that is involved for the DDL
SET_BASE_TABLE_NAME: Sets the base (dependent) table name
SET_BASE_TABLE_OWNER: Sets the base (dependent) table owner
SET_CURRENT_SCHEMA: Sets the default schema (user) name
SET_DDL_TEXT: Sets the DDL text
SET_LOGON_USER: Sets the logon username
SET_OBJECT_TYPE: Sets the object type
LCR$_ROW_RECORD Subprograms
ADD_COLUMN: Adds the value as old or new (depending on the value type that is specified) for the column
CONVERT_LONG_TO_LOB_CHUNK: Converts LONG data in a row LCR into a fixed-width CLOB, or converts LONG RAW data in a row LCR into a BLOB
DELETE_COLUMN: For the specified column, deletes the old value, the new value, or both, depending on the value type that is specified
GET_LOB_INFORMATION: Gets the large object (LOB) information for the column
GET_LOB_OFFSET: Returns the LOB offset for the specified column
GET_LOB_OPERATION_SIZE: Gets the operation size for the LOB column
GET_LONG_INFORMATION: Gets the LONG information for the column
GET_VALUE: Returns the old or new value for the specified column, depending on the value type that is specified
GET_VALUES: Returns a list of old or new values, depending on the value type that is specified
RENAME_COLUMN: Renames a column in an LCR
SET_LOB_INFORMATION: Sets the LOB information for the column
SET_LOB_OFFSET: Sets the LOB offset for the specified column
SET_LOB_OPERATION_SIZE: Sets the operation size for the LOB column
SET_VALUE: Overwrites the value of the specified column
SET_VALUES: Replaces the existing old or new values for the LCR, depending on the value type that is specified
Oracle Database 11g: Implement Streams I - 357

358 Modifying an LCR: Example
CREATE OR REPLACE FUNCTION zero_comm_data (
  evt IN SYS.AnyData)
RETURN SYS.AnyData
IS
  lcr      SYS.LCR$_ROW_RECORD;
  obj_name VARCHAR2(30);
  rc       NUMBER;
BEGIN
  IF evt.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
    rc := evt.GETOBJECT(lcr);
    obj_name := lcr.GET_OBJECT_NAME();
    IF obj_name = 'EMPLOYEES' THEN
      lcr.SET_VALUE('NEW', 'COMMISSION_PCT',
                    SYS.ANYDATA.CONVERTNUMBER(0));
      RETURN SYS.ANYDATA.CONVERTOBJECT(lcr);
    END IF;
  END IF;
  RETURN evt;
END;
/
Modifying an LCR: Example In the code fragment in the slide, a SYS.AnyData object that contains an LCR is passed into a transformation function. The function checks to ensure that: The object type is a row LCR, by using the SYS.AnyData member function GETTYPENAME The object name within the LCR is EMPLOYEES, by using the SYS.LCR$_ROW_RECORD member function GET_OBJECT_NAME If the message meets these conditions, the function sets the new value of COMMISSION_PCT to zero. The LCR is wrapped again in the SYS.AnyData type and returned to the caller. If the message is not a row LCR, or if it is an LCR for an object other than the EMPLOYEES table, the message is returned to the caller unchanged. Note: The apply process uses the OLD values in an LCR to identify the target row to which the change has to be applied at the destination site. If you want to transform the row LCRs to change all COMMISSION_PCT data to 0, you must modify the NEW column values for INSERTs or UPDATEs. The OLD value must not be changed because this most likely will cause a conflict. There are some situations in which you may want to change the OLD value on an UPDATE or DELETE—for example, if you want to force the OLD value to match the existing value at the target database. Oracle Database 11g: Implement Streams I - 358

359 Using LCR Extra Attributes
You can include test conditions in the transformation function for the extra attribute information of LCRs, if this information is captured: row_id, session# and serial#, thread#, tx_name, and username.
retval SYS.AnyData;
usernm VARCHAR2(35);
...
retval := lcr.GET_EXTRA_ATTRIBUTE('username');
rc := retval.GETVARCHAR2(usernm);
IF usernm = 'APPUSER' THEN ...
Using LCR Extra Attributes You can optionally use the GET_EXTRA_ATTRIBUTE member function of the LCR type to test for conditions based on the extra attribute information in the LCRs. These attributes are:
row_id: The original row ID in a DML row LCR. This attribute is not included in DDL LCRs or in row LCRs for index-organized tables.
serial#: The serial number of the session that performed the change captured in the LCR
session#: The identifier of the session that performed the change captured in the LCR
thread#: The thread number of the instance in which the change captured in the LCR was performed. Typically, the thread number is relevant only in a Real Application Clusters (RAC) environment.
tx_name: The name of the transaction that includes the LCR
username: The name of the user who performed the change captured in the LCR
Extra attributes are available only if the capture process at the source database has been configured to include the extra attribute information in the LCRs that it creates. Oracle Database 11g: Implement Streams I - 359
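As noted above, an extra attribute is only present in an LCR if the capture process has been told to include it. A hedged sketch of that configuration (the capture process name strm01_capture is hypothetical) using DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE:

```sql
BEGIN
  -- Include the 'username' extra attribute in every LCR
  -- that this capture process creates
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'strm01_capture',  -- hypothetical capture process name
    attribute_name => 'username',
    include        => TRUE);
END;
/
```

Passing include => FALSE for the same attribute would stop it from being captured again.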

360 Creating a Custom Rule-Based Transformation
BEGIN DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION( rule_name => 'departments5', transform_function => 'hr.devuser_to_test'); END; / Creating a Custom Rule-Based Transformation Use the SET_RULE_TRANSFORM_FUNCTION procedure to associate the transformation function with a rule in a positive rule set as shown in the slide. The DEVUSER_TO_TEST transformation function is discussed later in this lesson. Ensure that the transformation function does not raise any exceptions. Exceptions may cause a capture process, propagation, or apply process to be disabled, and you would need to correct the transformation function before the capture process, propagation, or apply process can proceed. Refer to the Oracle Database PL/SQL User’s Guide and Reference for information about PL/SQL error handling and exceptions. Oracle Database 11g: Implement Streams I - 360

361 Creating a Custom Rule-Based Transformation
Create a PL/SQL function of the form: Use the SET_RULE_TRANSFORM_FUNCTION procedure to associate the transformation function with a rule in the positive rule set. FUNCTION function_name ( parameter_name IN SYS.AnyData) RETURN [ SYS.AnyData | SYS.STREAMS$_ANYDATA_ARRAY ]; Creating a Custom Rule-Based Transformation (continued) You define a transformation as a PL/SQL function that takes a SYS.AnyData object as input and returns a SYS.AnyData object, or an array of SYS.AnyData objects. You use the SET_RULE_TRANSFORM_FUNCTION procedure in the DBMS_STREAMS_ADM package to specify a rule-based transformation for a rule. This procedure modifies the rule’s action context to specify the transformation. The action context name is STREAMS$_TRANSFORM_FUNCTION; the value is the name of a user-created PL/SQL function that performs the transformation. When a rule in a positive rule set evaluates to TRUE for a message in a Streams environment, and when an action context that contains a name-value pair with the name STREAMS$_TRANSFORM_FUNCTION is returned, the PL/SQL function is run, taking the message as an input parameter. When a rule evaluates to FALSE for a message in a Streams environment, the rule is not returned to the client, and any PL/SQL function appearing in a name-value pair in the action context is not run. Different rules can use the same or different transformations. Oracle Database 11g: Implement Streams I - 361

362 Custom Transformation Function: Example
CREATE OR REPLACE FUNCTION hr.devuser_to_test (
  evt IN SYS.AnyData)
RETURN SYS.AnyData
IS
  lcr    SYS.LCR$_ROW_RECORD;
  rc     NUMBER;
  retval SYS.AnyData;
  usernm VARCHAR2(35);
BEGIN
  IF evt.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
    rc := evt.GETOBJECT(lcr);
    retval := lcr.GET_EXTRA_ATTRIBUTE('username');
    rc := retval.GETVARCHAR2(usernm);
    IF usernm = 'DEVUSER' THEN
      lcr.SET_OBJECT_OWNER('TEST');
    END IF;
    RETURN SYS.ANYDATA.CONVERTOBJECT(lcr);
  END IF;
  RETURN evt;
END;
/
Custom Transformation Function: Example The example in the slide is a PL/SQL function that accepts a SYS.AnyData type as input and checks whether the message is a row LCR that was generated by the DEVUSER user. If so, the LCR is modified to point to the same object in a different schema. In this example, no further changes are made. But, in some cases, you may also need to change column names and data types, and possibly convert the column data as well. Note: If you need to change only the owner for a table or other database object, you can use a declarative transformation instead. In this example, however, the owner of the table is changed based on a specific condition, which is something you cannot do with declarative transformations. When creating custom rule-based transformations for DDL LCRs, you most likely would need to change not just the metadata (as shown in the example in the slide) but also the actual DDL text contained within the LCR. For example, if the custom rule-based transformation changes the name of a table, the table name in the DDL command must also be changed. DDL commands are stored in CLOB fields in the LCR$_DDL_RECORD object. Oracle Database 11g: Implement Streams I - 362

363 One-to-Many Transformation Function: Example
CREATE OR REPLACE FUNCTION hr.devuser_to_test2 (evt IN SYS.AnyData) RETURN STREAMS$_ANYDATA_ARRAY IS lcr SYS.LCR$_ROW_RECORD; rc NUMBER; retval STREAMS$_ANYDATA_ARRAY; usernm VARCHAR2(35); BEGIN RETURN retval; END; One-to-Many Transformation Function: Example A custom rule-based transformation function that can return multiple messages is a one-to-many transformation function. A one-to-many transformation function must have the following signature: FUNCTION user_function ( parameter_name IN ANYDATA) RETURN STREAMS$_ANYDATA_ARRAY; Oracle Database 11g: Implement Streams I - 363
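The slide leaves the function body elided. As a separate, hedged illustration of what a one-to-many body might do (all object names here are hypothetical, not part of the course example), the function below returns the original row LCR plus a second copy retargeted to an audit table:

```sql
CREATE OR REPLACE FUNCTION hr.split_to_audit (evt IN SYS.AnyData)
RETURN SYS.STREAMS$_ANYDATA_ARRAY
IS
  lcr  SYS.LCR$_ROW_RECORD;
  cpy  SYS.LCR$_ROW_RECORD;
  rc   NUMBER;
BEGIN
  IF evt.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
    rc := evt.GETOBJECT(lcr);
    rc := evt.GETOBJECT(cpy);                 -- second, independent copy
    cpy.SET_OBJECT_NAME('EMPLOYEES_AUDIT');   -- retarget the copy (hypothetical table)
    RETURN SYS.STREAMS$_ANYDATA_ARRAY(
             SYS.ANYDATA.CONVERTOBJECT(lcr),
             SYS.ANYDATA.CONVERTOBJECT(cpy));
  END IF;
  -- Non-row-LCR messages pass through unchanged, as a one-element array
  RETURN SYS.STREAMS$_ANYDATA_ARRAY(evt);
END;
/
```

Returning an empty array would discard the message entirely, which is another use of one-to-many functions.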

364 Viewing Rule-Based Transformations
Transformation functions: ALL_STREAMS_TRANSFORM_FUNCTION, DBA_STREAMS_TRANSFORM_FUNCTION Declarative transformations: DBA_STREAMS_TRANSFORMATIONS
SELECT rule_owner||'.'||rule_name RULE, transform_function_name FROM DBA_STREAMS_TRANSFORM_FUNCTION;
SELECT rule_owner||'.'||rule_name RULE, transform_type, from_schema_name, to_schema_name FROM DBA_STREAMS_TRANSFORMATIONS;
Viewing Rule-Based Transformations The DBA_STREAMS_TRANSFORM_FUNCTION view displays all rules that use custom rule-based transformation functions. This includes system-generated rules and user-written rules. You can query this view for information, such as the rule owner, the name of the rule with a transformation function defined, and the name of the transformation function. This view provides an easy method of displaying all rule-based transformations defined in a database and the rules that they are associated with. Here is an example of the output you may see from the first query shown in the slide:
RULE_OWNER  RULE_NAME     TRANSFORM_FUNCTION_NAME
----------  ------------  -----------------------
STRMADMIN   DEPARTMENTS5  HR.DEVUSER_TO_TEST
You can use the DBA_STREAMS_TRANSFORMATIONS view to manage all your rule-based transformations, including both declarative transformations and custom rule-based transformations. This view lists all transformations available on a system, in the order of execution. Oracle Database 11g: Implement Streams I - 364

365 Viewing Rule-Based Transformations
Viewing Rule-Based Transformations (continued) The DBA_STREAMS_TRANSFORMATIONS view displays information, such as: Rule information Transformation type Schema, table, or column transformation information User-defined transformation function name The order in which this transformation should be executed (STEP_NUMBER) The execution order relative to other declarative transformations on the same STEP_NUMBER
Viewing Rule-Based Transformations Declarative transformations: DBA_STREAMS_TRANSFORMATIONS Custom rule-based transformations: DBA_STREAMS_TRANSFORM_FUNCTION, ALL_STREAMS_TRANSFORM_FUNCTION
SELECT rule_owner, rule_name, transform_function_name FROM DBA_STREAMS_TRANSFORM_FUNCTION;
RULE_OWNER  RULE_NAME     TRANSFORM_FUNCTION_NAME
----------  ------------  -----------------------
STRMADMIN   DEPARTMENTS5  HR.HR_TO_DEMO
Oracle Database 11g: Implement Streams I - 365

366 Managing Custom Rule-Based Transformations
You can: Edit the transformation function Assign a different transformation function to a rule Remove a rule-based transformation If you edit the function itself, you need not modify the action context of the rules that use the transformation function. If you assign a different transformation function to a rule, use the SET_RULE_TRANSFORM_FUNCTION procedure to preserve the existing action contexts of the rules. Managing Custom Rule-Based Transformations To alter a rule-based transformation, you can either edit the transformation function or assign a different transformation function to a rule. If an action context contains name-value pairs in addition to the name-value pair that specifies the transformation, use caution when you modify the action context so that you do not change or remove any name-value pairs that are unrelated to the transformation. For example, subset rules use name-value pairs in an action context to perform internal transformations that convert UPDATE operations into INSERT and DELETE operations in certain situations (called a row migration). If you specify a new transformation or alter an existing transformation for a subset rule, use the SET_RULE_TRANSFORM_FUNCTION procedure of the DBMS_STREAMS_ADM package to ensure that you preserve the name-value pairs that perform row migrations. Oracle Database 11g: Implement Streams I - 366

367 Managing Custom Rule-Based Transformations
Changing the transformation function: Removing the transformation function for a rule: BEGIN DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION( rule_name =>'departments5', transform_function => 'hr.streams_mig_pkg.devuser_to_test'); END; / EXEC DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(- 'departments5', NULL); Managing Custom Rule-Based Transformations (continued) The SET_RULE_TRANSFORM_FUNCTION procedure modifies the specified rule’s action context. The information in an action context is an object of the SYS.RE$NV_LIST type, which consists of a list of name-value pairs. A rule-based transformation in Streams always consists of the following name-value pair in an action context: The name is STREAMS$_TRANSFORM_FUNCTION. The value is a SYS.AnyData instance containing a PL/SQL function name specified as a VARCHAR2. This is the function that performs the transformation. If the function is in a package, the package name must be specified. For example, the first command shown in the slide refers to the DEVUSER_TO_TEST function, which has been incorporated into the STREAMS_MIG_PKG package. If the schema for the transform function is not specified, the schema of the user who invokes the rule-based transformation function is used by default. If you specify NULL for transform_function, the procedure removes the current rule-based transformation from the rule. Note: This procedure does not verify that the specified transformation function exists. If the function does not exist, an error is raised when a Streams client tries to invoke the transformation function. Oracle Database 11g: Implement Streams I - 367

368 Oracle Database 11g: Implement Streams I - 368
Summary In this lesson, you should have learned how to: Describe a rule-based transformation Devise a plan to implement rule-based transformations in your Streams environment Specify declarative rule-based transformations Implement a custom rule-based transformation Manage rule-based transformations Oracle Database 11g: Implement Streams I - 368

369 Practice 12 Overview: Implementing a Transformation
This practice covers the following topics: Configuring apply for a shared table in a different schema Implementing a transformation on propagation to the destination Querying the data dictionary for information about transformations Oracle Database 11g: Implement Streams I - 369

370 Result of Practice 12: Implementing a Transformation
(Diagram: on the AMER database, STRM01_CAPTURE captures changes to OE.ORDERS into the Q_CAPTURE STREAMS_QUEUE; STRM01_PROPAGATION, with rule STRM01_PROP_DEMO, propagates them to the Q_APPLY queue on the EURO database, where STRM01_APPLY, with rule STRM01_APPLY_DEMO, applies them to DEMO.ORDERS.) Oracle Database 11g: Implement Streams I - 370

371 Apply Handlers

372 Oracle Database 11g: Implement Streams I - 372
Objectives After completing this lesson, you should be able to: Describe the purpose of an apply handler List the different types of apply handlers Describe the purpose of an error handler Create and use an apply handler Manage apply handlers for a database Oracle Database 11g: Implement Streams I - 372

373 Oracle Database 11g: Implement Streams I - 373
Apply An apply process dequeues LCRs and user messages from a specific queue and does one of the following: Applies each LCR individually, even though it is part of a larger transaction Passes the event as a parameter to a user-defined procedure It commits a transaction only after the successful completion of all LCRs within that transaction. Apply An apply process dequeues logical change records (LCRs) or user-enqueued events from a specific SYS.AnyData queue and either applies each LCR directly to a database object or passes the event as a parameter to a user-defined procedure, called an apply handler. The LCRs that are dequeued by an apply process contain data manipulation language (DML) changes or data definition language (DDL) changes that an apply process can apply to database objects in a destination database. Though each LCR is handled separately, it is part of a transaction. If apply is unable to successfully apply an LCR within a transaction, the entire transaction is rolled back and placed in the error queue. An apply process can apply either captured LCRs or user-enqueued LCRs, but not both. For captured LCRs, an apply process can apply captured LCRs from only one source database, because processing these LCRs requires knowledge of the dependencies, meaningful transaction ordering, and transactional boundaries at the source database. Also, each apply process can apply captured LCRs from only one capture process. If there are multiple capture processes running on a source database, and if LCRs from more than one of these capture processes are applied to a destination database, then there must be a separate apply process to apply changes from each capture process. A user-enqueued event that is dequeued by an apply process is of the SYS.AnyData type and can contain any data, including a user-created LCR. Oracle Database 11g: Implement Streams I - 373

374 Message Processing
(Diagram: messages dequeued from the queue are routed by type — apply process AP01 passes user messages to a message handler and applies LCRs directly; AP02 passes DML LCRs to a DML handler; AP03 uses a DML handler together with a precommit handler and a DDL handler.)
Message Processing Your options for message processing depend on whether the message that is received by an apply process is a user-enqueued message or not, and whether or not the message is an LCR. The diagram in the slide gives a summary of the different options. A separate apply process is required for each source database that populates the local queue. User-enqueued messages cannot be processed by the same apply process that processes captured messages. Note In the diagram in the slide, the apply processes represent any apply process, not a specific one. For example, the actions performed by AP02 and AP03 could both be performed by a single apply process: DML changes to the HR.EMPLOYEES table could be applied directly, but changes for the HR.DEPARTMENTS table would go to the DML handler defined for that table. Each apply process shows a possible configuration, not a required configuration. Oracle Database 11g: Implement Streams I - 374

375 Oracle Database 11g: Implement Streams I - 375
Apply Handlers User-written custom apply procedures Written in PL/SQL, Java, C, or C++ Java, C, and C++ procedures must be wrapped in PL/SQL before they can be used. Can be used for: Custom transformations Column subsetting or renaming Normalizing or denormalizing data Populating related fields or tables Implementing event processing logic that cannot be programmed using rules Apply Handlers You can configure an apply process to process a captured or user-enqueued event that contains an LCR in either of two ways: apply the LCR event directly, or pass the event as a parameter to a user procedure for processing. The second option provides the greatest amount of flexibility in processing the event. User-defined procedures used to implement an apply handler are written in PL/SQL, Java, C, or C++. Therefore, your apply procedure can perform any task that you specify through a programming language. A typical application of a user-defined procedure would be to reformat the data that is represented by the LCR before applying it to a local table (for example, field format, object name, and column name mapping transformations). A user-defined procedure could also be used to perform column subsetting, to normalize or denormalize data, or to update other objects that may not be present in the source database. Oracle Database 11g: Implement Streams I - 375

376 Apply Handlers for LCR Messages
Single DDL handler allowed Multiple DML handlers supported for each table and for each apply process Precommit handler works with a DML or message handler (Diagram: a row LCR is passed to a DML handler, which executes the LCR against the target table; DDL LCRs go to the DDL handler, and a precommit handler receives the commit information.) Apply Handlers for LCR Events An apply process passes the LCR event as a parameter to a user procedure for processing. The user procedure can then process the LCR event in a customized way. A user procedure that processes row LCRs resulting from DML statements is called a DML handler. A user procedure that processes DDL LCRs resulting from DDL statements is called a DDL handler. A precommit handler is a user-defined PL/SQL procedure that can receive the commit information for a transaction and process the commit information in any customized way. A precommit handler may work with a DML handler or a message handler. A DML handler processes each dequeued row LCR that contains a specific operation on a specific table. You can specify multiple DML handlers on the same table to handle different DML operations performed on the table. Applying the changes in the LCR to the database objects is optional when using an apply handler. When you execute a row LCR in an apply handler, the apply process applies the LCR without calling the apply handler again. Oracle Database 11g: Implement Streams I - 376

377 Creating an Apply Handler Procedure
A DML, DDL, or message handler must take the following form: You can specify a separate DML handler for a shared table for each of the following operations: INSERT UPDATE DELETE LOB_UPDATE PROCEDURE user_procedure ( parameter_name IN SYS.AnyData); Creating an Apply Handler Procedure A DML, DDL, or message handler must have the following signature: PROCEDURE handler_procedure ( parameter_name IN SYS.AnyData); In this signature, handler_procedure represents the name of the procedure and parameter_name is an arbitrary name for the parameter passed to the procedure. The parameter that is passed to the procedure is a SYS.AnyData event, which can be an encapsulation of a DML or DDL LCR, or a user-enqueued message. You can have multiple DML handlers associated with an apply process. For example, you may have one DML handler to process INSERT and UPDATE operations on the HR.EMPLOYEES table and a different DML handler to process DELETE operations. You can have only one DDL or message handler associated with an apply process. Oracle Database 11g: Implement Streams I - 377
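A handler with this signature is then registered with DBMS_APPLY_ADM.SET_DML_HANDLER. A hedged sketch (the handler procedure and apply process names are hypothetical; the table is taken from the later DML handler example):

```sql
BEGIN
  -- Route UPDATE row LCRs on OE.ORDERS to a user-written DML handler
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'oe.orders',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,                       -- a DML handler, not an error handler
    user_procedure => 'strmadmin.conv_order_totals',
    apply_name     => 'strm01_apply');             -- hypothetical apply process
END;
/
```

Repeating the call with a different operation_name (INSERT, DELETE, or LOB_UPDATE) registers a separate handler for each operation; passing NULL for user_procedure removes the handler.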

378 Oracle Database 11g: Implement Streams I - 378
DML Handler: Example CREATE OR REPLACE PROCEDURE conv_order_totals (lcr_anydata IN SYS.AnyData) IS val SYS.AnyData; ... BEGIN t := lcr_anydata.getObject(lcr); val := (lcr.get_value('new', 'ORDER_TOTAL')); t := val.getNumber(ordtotal); val := (lcr.get_value('new', 'CUSTOMER_ID')); t := val.getChar(custid); SELECT us_exchange_rate INTO curr_ex FROM oe.currency_exchanges ex, oe.customers c WHERE c.customer_id=custid AND c.country=ex.cid; ordtotal := ordtotal * curr_ex; lcr.set_value('new', 'ORDER_TOTAL', SYS.AnyData.ConvertNUMBER(ordtotal)); lcr.execute(true); END; DML Handler: Example The example in the slide shows a sample DML handler that alters the value of a column in the LCR. In the DML handler procedure, the following tasks are performed: The LCR event is passed in as an argument. The new value for the ORDER_TOTAL column is fetched from the event using the GET_VALUE member method function of the LCR and the GETNUMBER member method function of the SYS.AnyData type. The “new” value represents the value of ORDER_TOTAL after the DML change has been implemented, whereas “old” represents what the value of ORDER_TOTAL was before the DML change. The new value for the CUSTOMER_ID column is extracted from the event using the GET_VALUE member method function of the LCR and the GETCHAR member method function of the SYS.AnyData type. This value may not have been updated if the DML action type is an UPDATE or DELETE operation. Additional code must be added to check the old column value if the new value for CUSTOMER_ID is NULL. The customer ID is used to locate the country of origin for that customer, and to get the current exchange rate for converting currency into US dollars. The order total is converted into US dollars and the new total saved in the LCR. The LCR is executed, applying the modified DML change to the target database object. 
TRUE specifies that the SET_UPDATE_CONFLICT_HANDLER procedure in DBMS_APPLY_ADM be used to resolve conflicts resulting from the execution of the LCR. Oracle Database 11g: Implement Streams I - 378

379 Oracle Database 11g: Implement Streams I - 379
DML Handler: Example (continued) DML handlers should handle any possible errors and retry the operation, if possible. If a DML handler cannot apply an LCR successfully, the entire transaction is rolled back and placed in the error queue. DML Handler: Example CREATE OR REPLACE PROCEDURE conv_order_totals (lcr_anydata IN SYS.AnyData) IS val SYS.AnyData; ... BEGIN t := lcr_anydata.getObject(lcr); val := (lcr.get_value('new', 'ORDER_TOTAL')); t := val.getNumber(ordtotal); val := (lcr.get_value('new', 'CUSTOMER_ID')); t := val.getChar(custid); SELECT us_exchange_rate INTO curr_ex FROM oe.currency_exchanges ex, oe.customers c WHERE c.customer_id=custid AND c.country=ex.cid; ordtotal := ordtotal * curr_ex; lcr.set_value('new', 'ORDER_TOTAL', SYS.AnyData.ConvertNUMBER(ordtotal)); lcr.execute(true); END; Oracle Database 11g: Implement Streams I - 379

380 Implementing a DML Handler
Specify a DML handler with the DBMS_APPLY_ADM.SET_DML_HANDLER procedure. BEGIN DBMS_APPLY_ADM.SET_DML_HANDLER( object_name => 'OE.ORDERS', object_type => 'TABLE', operation_name => 'INSERT', user_procedure => 'OE.CONV_ORDER_TOTALS', apply_name => 'APPLY_SITE1_LCRS', assemble_lobs => true); END; / Implementing a DML Handler After the DML handler procedure is created, you must associate it with a database table. You specify the type of operation for which the DML handler should be used. The possible values for the operation_name parameter are INSERT, UPDATE, DELETE, and LOB_UPDATE. You can have only one DML handler for each type of operation on the same table. If you want the DML handler to be used for more than one type of DML operation, you must call this procedure once for each type of operation. You may set a DML handler for a specific apply process, or you may set a DML handler to be a general DML handler that is used by all apply processes in the database: If the apply_name parameter is non-NULL, the DML handler or error handler is set for the specified apply process. The handler is not invoked for other apply processes at the local destination database. If the apply_name parameter is NULL (default value), the handler is set as a general handler for all apply processes at the destination database. When a handler is set for a specific apply process, this handler takes precedence over any general apply handlers. To unset a DML handler, use the SET_DML_HANDLER procedure and set the user_procedure parameter to NULL for a specific operation on a specific table. Note: DML handlers should handle any errors that may occur within the handler code. Oracle Database 11g: Implement Streams I - 380
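As the note above states, a DML handler is unset by calling SET_DML_HANDLER again with user_procedure set to NULL. A sketch, reusing the table, operation, and apply process names from the slide:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'OE.ORDERS',
    object_type    => 'TABLE',
    operation_name => 'INSERT',           -- the operation the handler was set for
    user_procedure => NULL,               -- NULL unsets the handler
    apply_name     => 'APPLY_SITE1_LCRS');
END;
/
```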

381 Oracle Database 11g: Implement Streams I - 381
DDL Handler: Example CREATE OR REPLACE PROCEDURE audit_ddl_actions (evt IN SYS.ANYDATA) IS lcr SYS.LCR$_DDL_RECORD; rc PLS_INTEGER; ddl_text CLOB; BEGIN rc := evt.getObject(lcr); DBMS_LOB.CreateTemporary(ddl_text, TRUE); lcr.GET_DDL_TEXT(ddl_text); INSERT INTO strmadmin.ddl_history VALUES(SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(), lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(), ddl_text, lcr.GET_EXTRA_ATTRIBUTE('USERNAME')); lcr.EXECUTE(); DBMS_LOB.FreeTemporary(ddl_text); END; DDL Handler: Example The example in the slide shows a sample DDL handler that records the DDL change in a table. In addition to performing the DDL, the example handler does two things: Inserts a row into the DDL_HISTORY table Performs the actual DDL on the database  (lcr.execute) In the DDL handler procedure in the slide, the following tasks are performed: The LCR event is passed in as an argument. The DDL LCR is retrieved from the event by calling the GetObject member method procedure of the SYS.AnyData type. A temporary character large object (CLOB) is created to hold the actual DDL command. Information about the DDL command is retrieved from the DDL LCR by using various member method procedures of the LCR$_DDL_RECORD type. The LCR member function GET_EXTRA_ATTRIBUTE is called to get the name of the user who performed this DDL action wrapped in a SYS.AnyData type. The DDL audit information is inserted into the DDL_HISTORY table. After the DDL information is saved, the DDL LCR is executed and the temporary CLOB is freed. Oracle Database 11g: Implement Streams I - 381

382 Implementing a DDL Handler
Use the ddl_handler parameter in either of the following two procedures: DBMS_APPLY_ADM.CREATE_APPLY DBMS_APPLY_ADM.ALTER_APPLY BEGIN DBMS_APPLY_ADM.ALTER_APPLY( apply_name =>'APPLY_SITE1_LCRS', ddl_handler =>'STRMADMIN.AUDIT_DDL_ACTIONS'); END; / Implementing a DDL Handler After the DDL handler procedure is created, you associate it with an apply process. You can specify the DDL handler for an apply process by using the ddl_handler parameter in the ALTER_APPLY or CREATE_APPLY procedures in the DBMS_APPLY_ADM package. Only one DDL handler can be associated with an apply process. You remove the DDL handler for an apply process by setting the remove_ddl_handler parameter to TRUE in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. Oracle Database 11g: Implement Streams I - 382
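To undo the association shown above, the note mentions the remove_ddl_handler parameter of ALTER_APPLY; a sketch:

```sql
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name         => 'APPLY_SITE1_LCRS',
    remove_ddl_handler => TRUE);  -- unsets the DDL handler for this apply process
END;
/
```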

383 Oracle Database 11g: Implement Streams I - 383
Precommit Handler Can be used with user-enqueued events or captured events Receives as input the commit SCN generated at the source database Enables you to perform actions at the end of a transaction, before it is committed Precommit Handler A commit LCR is an internal row LCR that contains a COMMIT directive. The precommit handler is triggered when an internal commit directive is encountered for captured events or when a transaction boundary is reached for user-enqueued events. For LCR events, a precommit handler enables you to perform actions before the events associated with that LCR are committed. For user-enqueued events, a precommit handler enables you to perform actions after all the user-enqueued messages in a transaction have been dequeued. For example, if a shipping department enqueues an event for each item of an order that was recently shipped, you can use the apply process with a message handler to update the customer order tables. You can then use a precommit handler to determine which items were not shipped and generate a communication for the customer regarding the status of the order. If a precommit handler raises an exception, the entire transaction is rolled back at the apply site, and all the events in the transaction are moved to the error queue. Precommit handler Commit SCN Oracle Database 11g: Implement Streams I - 383

384 Precommit Handler: Example
CREATE OR REPLACE PROCEDURE strmadmin.audit_commit (cscn IN NUMBER) IS BEGIN -- Insert commit information into dml_history INSERT INTO strmadmin.dml_history (timestamp, commit_scn) VALUES (SYSDATE, cscn); END; / Precommit Handler: Example In the example in the slide, the precommit handler is used to record the commit information for the DML audit records in the DML_HISTORY table. Another example would be an inventory application that uses a precommit handler to check whether inventory is available before sending an order confirmation. A DML handler is used in conjunction with this precommit handler. A precommit handler receives the commit SCN in the queue of an apply process before the transaction is completely processed by the apply process. For example, the DML handler procedure could log details about each DML change that makes up the transaction into the DML_HISTORY table. The precommit handler then inserts a new row with the current time and the source database commit SCN for the transaction. You could also use a precommit handler with user-enqueued events. A user or application may enqueue messages into a queue and then issue a COMMIT statement to end the transaction. The enqueued messages are organized into a message group. When an apply process is configured to process user-enqueued messages, it generates a single transaction identifier and commit SCN for all the messages in a message group. Transaction identifiers and commit SCN values generated by an individual apply process have no relation to the source transaction, nor to the values generated by any other apply process. A precommit handler configured for such an apply process receives the commit SCN supplied by the apply process. Oracle Database 11g: Implement Streams I - 384

385 Implementing a Precommit Handler
Use the precommit_handler parameter in either of the following two procedures: DBMS_APPLY_ADM.CREATE_APPLY DBMS_APPLY_ADM.ALTER_APPLY BEGIN DBMS_APPLY_ADM.ALTER_APPLY ( apply_name => 'apply_site1_lcrs', precommit_handler=>'strmadmin.audit_commit'); END; / Implementing a Precommit Handler After the precommit handler procedure is created, you associate it with an apply process by using the precommit_handler parameter in the ALTER_APPLY or CREATE_APPLY procedure of DBMS_APPLY_ADM. The procedure specified in the precommit_handler parameter must have the following signature: PROCEDURE handler_procedure ( parameter_name IN NUMBER); In this signature, handler_procedure represents the name of the procedure and parameter_name represents the name of the parameter passed to the procedure. The parameter passed to the procedure is an SCN from an internal commit directive in the queue that is used by the apply process. The precommit handler procedure must conform to the following restrictions: Any work that commits must be an autonomous transaction. Any rollback must be to a named savepoint created in the procedure. You can have only one precommit handler for an apply process. You remove the precommit handler for an apply process by setting the remove_precommit_handler parameter to TRUE in the ALTER_APPLY procedure. Oracle Database 11g: Implement Streams I - 385
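Per the note above, the precommit handler is removed with the remove_precommit_handler parameter of ALTER_APPLY; a sketch:

```sql
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name               => 'apply_site1_lcrs',
    remove_precommit_handler => TRUE);  -- unsets the precommit handler
END;
/
```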

386 Oracle Database 11g: Implement Streams I - 386
Implementing a Precommit Handler (continued) Note: Autonomous transactions are independent transactions that can be called from within another transaction. An autonomous transaction enables you to leave the context of the calling transaction, perform some SQL operations, commit or undo those operations, and then return to the calling transaction’s context and continue with that transaction. After it is invoked, an autonomous transaction is totally independent of the main transaction that called it. It does not see any of the uncommitted changes made by the main transaction and does not share any locks or resources with the main transaction. Changes made by an autonomous transaction become visible to other transactions upon commit of the autonomous transactions. Autonomous transactions are useful for implementing actions, such as transaction logging and retry counters, which need to be performed independently, regardless of whether the calling transaction commits or rolls back. You can call autonomous transactions from within a PL/SQL block. Use pragma AUTONOMOUS_TRANSACTION. A pragma is a compiler directive. You can declare the following kinds of PL/SQL blocks to be autonomous: Stored procedure or function Local procedure or function Package Type method Top-level anonymous block Oracle Database 11g: Implement Streams I - 386
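To illustrate the restriction that any committing work in a precommit handler must be an autonomous transaction, here is a hypothetical variant of the earlier audit_commit procedure (the name audit_commit_auto is an assumption) that logs through an autonomous transaction:

```sql
CREATE OR REPLACE PROCEDURE strmadmin.audit_commit_auto (cscn IN NUMBER)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs independently of the apply transaction
BEGIN
  INSERT INTO strmadmin.dml_history (timestamp, commit_scn)
  VALUES (SYSDATE, cscn);
  COMMIT;  -- permitted here only because this block is autonomous
END;
/
```

Because the block is autonomous, its COMMIT affects only the logged row and leaves the apply process's own transaction untouched.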

387 Oracle Database 11g: Implement Streams I - 387
Error Handler User-written custom apply procedure Written in PL/SQL, Java, C, or C++ Java, C, and C++ procedures must be wrapped in PL/SQL before they can be used. Associated with a specific DML operation on a table Executed when a row LCR raises an apply process error One for each DML operation on each table Error Handler An error handler is a user-defined procedure that handles errors resulting from a row LCR dequeued by any apply process that contains a specific operation on a specific table. You can specify multiple error handlers on the same table to handle errors that result from different DML operations on the table. You can set an error handler for a specific apply process, or you can set an error handler as a general error handler that is used by all apply processes that apply the specified operation to the specified table. You can use the error handler to resolve possible error conditions, or you can simply log the error and notify the database administrators of the error. Running an error handler results in one of the following outcomes: The error handler successfully resolves the error and returns control to the apply process. The error handler fails to resolve the error, and the error is raised. The raised error causes the transaction to be rolled back and placed in the error queue. If you want to retry the DML operation, use the error handler procedure to run the EXECUTE member procedure for the LCR. Note: You cannot specify an error handler for a specific table if that table is configured to use a DML handler. A DML handler should handle any possible errors within the handler code. Oracle Database 11g: Implement Streams I - 387

388 Default Apply with Error Handler
Attempt to apply LCR. Check for conflicts or apply errors. If errors occur, call a user-specified error handling procedure. After resolving the error, apply the change. For unresolved errors, the entire transaction is rolled back and all LCRs associated with the transaction are placed into the error queue. APnn LCR Apply changes. Conflict handler Default Apply with Error Handler Assume that an apply process applies the event without using DML handlers. The apply process either successfully applies the change in the LCR to a database object or, if a conflict or apply error is encountered, tries to resolve the error with a specified error handler, if one is defined: If the user-specified error handler can resolve the error, it applies the original or modified LCR (if specified). If the error handler cannot resolve the error, the apply process places the transaction and all LCRs associated with the transaction into the error queue. Note: A conflict handler can be used to resolve update conflicts. This concept is covered in the lesson titled “Conflict Resolution.” In the following example, the steps that trigger the error are as follows: At the destination database, a user inserts a row with a region_id value of 6. At the source database, a user inserts a row into the HR.REGIONS table with a region_id value of 6. A capture process at the source database captures the change described in step 2. A propagation propagates the LCR containing the change from a queue at the source database to the queue used by the apply process at the destination database. When the apply process tries to apply the LCR, an error results because of a primary key violation. The apply process invokes the error handler to handle the error. Error handler Oracle Database 11g: Implement Streams I - 388

389 Example of Error Handler
CREATE OR REPLACE PROCEDURE regions_pk_error ( message IN ANYDATA, … ) IS reg_id NUMBER; BEGIN IF error_numbers(1) IN ( 1 , 2290 ) THEN ad := DBMS_STREAMS.GET_INFORMATION('CONSTRAINT_NAME'); ret := ad.GetVarchar2(errlog_rec.text); ELSE errlog_rec.text := NULL ; END IF ; ad := DBMS_STREAMS.GET_INFORMATION('SENDER'); ret := ad.GETVARCHAR2(errlog_rec.sender); apply_name := DBMS_STREAMS.GET_STREAMS_NAME(); ret := message.GETOBJECT(lcr); errlog_rec.object_name := lcr.GET_OBJECT_NAME() ; errlog_rec.command_type := lcr.GET_COMMAND_TYPE() ; errlog_rec.errnum := error_numbers(1) ; errlog_rec.errmsg := error_messages(1) ; END regions_pk_error; Example of Error Handler In the example shown: 1. The error handler logs the error in the STRMADMIN.ERRORLOG table. 2. The error handler modifies the region_id value in the LCR using a sequence and executes the LCR to apply it. The complete example is as follows: Complete the following steps to create the regions_pk_error error handler: 1. Create the sequence used by the error handler to assign new primary key values by connecting as the hr user and running the following statement: CONNECT hr/hr CREATE SEQUENCE hr.reg_exception_s START WITH 9000; 2. This example assumes that users at the destination database will never insert a row into the HR.REGIONS table with a region_id greater than 8999. 3. Grant the Streams administrator the ALL privilege on the sequence: GRANT ALL ON reg_exception_s TO strmadmin; 4. Create the ERRORLOG table by connecting as the Streams administrator and running the following statement: CONNECT strmadmin/streams Oracle Database 11g: Implement Streams I - 389

390 Oracle Database 11g: Implement Streams I - 390
Example of Error Handler (continued) CREATE TABLE strmadmin.errorlog( logdate DATE, apply_name VARCHAR2(30), sender VARCHAR2(100), object_name VARCHAR2(32), command_type VARCHAR2(30), errnum NUMBER, errmsg VARCHAR2(2000), text VARCHAR2(2000), lcr SYS.LCR$_ROW_RECORD); 5. Create a package that includes the regions_pk_error procedure: CREATE OR REPLACE PACKAGE errors_pkg AS TYPE emsg_array IS TABLE OF VARCHAR2(2000) INDEX BY BINARY_INTEGER; PROCEDURE regions_pk_error( message IN ANYDATA, error_stack_depth IN NUMBER, error_numbers IN DBMS_UTILITY.NUMBER_ARRAY, error_messages IN EMSG_ARRAY); END errors_pkg ; / 6. Create the package body: CREATE OR REPLACE PACKAGE BODY errors_pkg AS PROCEDURE regions_pk_error ( message IN ANYDATA, error_stack_depth IN NUMBER, error_numbers IN DBMS_UTILITY.NUMBER_ARRAY, error_messages IN EMSG_ARRAY ) IS reg_id NUMBER; ad ANYDATA; lcr SYS.LCR$_ROW_RECORD; ret PLS_INTEGER; vc VARCHAR2(30); apply_name VARCHAR2(30); errlog_rec errorlog%ROWTYPE ; ov2 SYS.LCR$_ROW_LIST; BEGIN -- Access the error number from the top of the stack. -- In case of check constraint violation, -- get the name of the constraint violated. IF error_numbers(1) IN ( 1 , 2290 ) THEN ad := DBMS_STREAMS.GET_INFORMATION('CONSTRAINT_NAME'); ret := ad.GetVarchar2(errlog_rec.text); ELSE errlog_rec.text := NULL ; END IF ; Oracle Database 11g: Implement Streams I - 390

391 Oracle Database 11g: Implement Streams I - 391
Example of Error Handler (continued) -- Get the name of the sender and the name of the apply process. ad := DBMS_STREAMS.GET_INFORMATION('SENDER'); ret := ad.GETVARCHAR2(errlog_rec.sender); apply_name := DBMS_STREAMS.GET_STREAMS_NAME(); -- Try to access the LCR. ret := message.GETOBJECT(lcr); errlog_rec.object_name := lcr.GET_OBJECT_NAME() ; errlog_rec.command_type := lcr.GET_COMMAND_TYPE() ; errlog_rec.errnum := error_numbers(1) ; errlog_rec.errmsg := error_messages(1) ; INSERT INTO strmadmin.errorlog VALUES (SYSDATE, apply_name, errlog_rec.sender, errlog_rec.object_name, errlog_rec.command_type, errlog_rec.errnum, errlog_rec.errmsg, errlog_rec.text, lcr); -- Add the logic to change the contents of LCR with correct values. -- In this example, get a new region_id number -- from the hr.reg_exception_s sequence. ov2 := lcr.GET_VALUES('new', 'n'); FOR i IN 1 .. ov2.count LOOP IF ov2(i).column_name = 'REGION_ID' THEN SELECT hr.reg_exception_s.NEXTVAL INTO reg_id FROM DUAL; ov2(i).data := ANYDATA.ConvertNumber(reg_id) ; END IF ; END LOOP ; -- Set the NEW values in the LCR. lcr.SET_VALUES(value_type => 'NEW', value_list => ov2); -- Execute the modified LCR to apply it. lcr.EXECUTE(true); END regions_pk_error; END errors_pkg; / Oracle Database 11g: Implement Streams I - 391

392 Implementing an Error Handler
Specify an error handler by setting the user_procedure and error_handler parameters of the SET_DML_HANDLER procedure in DBMS_APPLY_ADM: For a specific database object For a specific DML operation An error handler must be of the following form: PROCEDURE user_procedure ( evt IN SYS.AnyData, error_stack_depth IN NUMBER, error_numbers IN DBMS_UTILITY.NUMBER_ARRAY, error_messages IN emsg_array); Implementing an Error Handler You create an error handler by running the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package and setting the error_handler parameter to TRUE. The user_procedure parameter must be set to the name of the error handler. In the syntax shown in the slide, user_procedure stands for the name of the procedure. Each parameter is required and must have the specified data type. However, you can change the names of the parameters. The emsg_array parameter must be a user-defined array that is a PL/SQL table of the VARCHAR2 type with at least 76 characters. For example, suppose that a check constraint is violated on a table. You could use an error handler to modify the LCR to set the column value to an acceptable level, and then log information about the origins of the transaction and the unacceptable value. To unset an error handler, use the SET_DML_HANDLER procedure and set the user_procedure parameter to NULL for a specific operation on a specific table. The error_handler parameter does not need to be specified. Oracle Database 11g: Implement Streams I - 392
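Putting this signature together with the regions_pk_error example from the previous pages, the registration call might look like the following sketch (setting error_handler to TRUE marks the procedure as an error handler rather than a DML handler):

```sql
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'HR.REGIONS',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => TRUE,   -- invoke the procedure only when an apply error occurs
    user_procedure => 'STRMADMIN.ERRORS_PKG.REGIONS_PK_ERROR',
    apply_name     => NULL);  -- NULL: general handler for all apply processes
END;
/
```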

393 Restrictions for Apply Handler Procedures
Do not execute COMMIT or ROLLBACK in the apply handler procedure. If you are modifying the LCR and then using the EXECUTE member procedure in the apply handler procedure: Do not manipulate more than one row in a row operation For UPDATE or DELETE, include the entire key of the target table in the list of old values For INSERT, include the entire key of the target table in the list of new values Do not modify LONG, LONG RAW, or LOB column data in an LCR Restrictions for Apply Handler Procedures The following restrictions apply to the user procedure: Do not execute COMMIT or ROLLBACK statements. A transaction may consist of multiple LCRs. Performing a commit in an apply handler may endanger the consistency of the transaction that contains the LCR. When the transaction has been processed by the apply process, a commit is performed. If you are modifying a row by using the EXECUTE member procedure for the row LCR, do not modify more than one row in a row operation. You must construct and manually execute any DML statements that manipulate more than one row. If the command type is UPDATE or DELETE, row operations resubmitted by using the EXECUTE member procedure for the LCR must include the entire key in the list of old values. If the command type is INSERT, row operations resubmitted by using the EXECUTE member procedure for the LCR should include the entire key in the list of new values. Otherwise, duplicate rows are possible. Do not modify LONG, LONG RAW, or LOB column data in an LCR. This includes DML handlers, error handlers, and rule-based transformation functions. The key for a table is the primary key, unless a substitute key has been specified by the SET_KEY_COLUMNS procedure. Oracle Database 11g: Implement Streams I - 393

394 Customizing Apply Handler Actions
Utilities in the DBMS_STREAMS package enable you to further customize the actions taken by apply handlers, rules, or transformations: GET_INFORMATION GET_STREAMS_NAME GET_STREAMS_TYPE Member functions of the LCR record types also provide information for customizing actions: GET_COMMAND_TYPE GET_COMMIT_SCN GET_COMPATIBLE Customizing Apply Handler Actions The actions performed in an apply handler can be customized further for your environment by using information external to the LCR or user-enqueued event. Here are some examples: GET_INFORMATION: When used in an apply handler with SENDER specified as the type of information, this function returns the name of the sender for the current LCR (from its AQ message properties). If used within a DML or error handler with CONSTRAINT_NAME specified as the type of information, it returns the name of the constraint that was violated for an LCR that raised an error. GET_STREAMS_NAME: This function gets the Streams name of the process that invoked the rule or procedure, if the type of the Streams invoker is CAPTURE, APPLY, or ERROR_EXECUTION. You can use this function in rule conditions, rule-based transformations, apply handlers, and error handlers. For example, when using multiple apply processes to apply changes from different source databases to the same set of tables, you may decide to configure each apply process to use the same error handler procedure. You can use the GET_STREAMS_NAME function to determine which apply process raised the error. Oracle Database 11g: Implement Streams I - 394

395 Oracle Database 11g: Implement Streams I - 395
Customizing Apply Handler Actions (continued) GET_STREAMS_TYPE: This function gets the type of Streams invoking process. The values returned are CAPTURE, APPLY, ERROR_EXECUTION, or NULL. You can use this function in rule conditions, rule-based transformations, apply handlers, and error handlers. For example, you can use the GET_STREAMS_TYPE function to instruct a DML handler to operate differently if it is processing events from the error queue (ERROR_EXECUTION type) instead of the apply process queue (APPLY type). The following functions are common to both the LCR$_DDL_RECORD and LCR$_ROW_RECORD types: GET_COMMAND_TYPE: This function returns the command type of the LCR. The values returned are the same as those returned by the OCI statement handler attribute OCI_ATTR_SQLFNCODE. The SQL Command Codes table in the Oracle Call Interface Programmer’s Guide has a complete list of the command types. GET_COMMIT_SCN: This function returns the commit SCN of the transaction to which the current LCR belongs. The commit SCN for a transaction is available only during apply or during error transaction execution. This function can be used only in a DML handler, DDL handler, or error handler. Such a handler may use the SCN obtained by this procedure to flash back to the transaction commit time for an LCR. In this case, the flashback must be performed at the source database for the LCR. The commit SCN may not be available for an LCR that is part of an incomplete transaction. For example, user-enqueued LCRs may not have a commit SCN. If the commit SCN is not available for an LCR, this function returns NULL. GET_COMPATIBLE: This function returns the database compatibility that is required to support the LCR. You can use this function in a rule condition or in an apply handler to perform special actions for an unsupported LCR. Note: You can determine which database objects in a database are not supported by Streams by querying the DBA_STREAMS_UNSUPPORTED data dictionary view. 
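A handler shared by several Streams clients can branch on these utilities; this hypothetical fragment sketches the error-queue check described above:

```sql
-- Hypothetical fragment from inside a shared DML or error handler body
DECLARE
  invoker_type VARCHAR2(30);
  invoker_name VARCHAR2(30);
BEGIN
  invoker_type := DBMS_STREAMS.GET_STREAMS_TYPE();
  invoker_name := DBMS_STREAMS.GET_STREAMS_NAME();
  IF invoker_type = 'ERROR_EXECUTION' THEN
    NULL;  -- LCR is being reexecuted from the error queue: e.g., skip auditing
  ELSIF invoker_type = 'APPLY' THEN
    NULL;  -- normal apply: full processing, keyed by invoker_name
  END IF;
END;
/
```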
Oracle Database 11g: Implement Streams I - 395

396 Oracle Database 11g: Implement Streams I - 396
LOB Assembly INSERT LOB_UPDATE INSERT LOB_UPDATE LOB locator LOB_TRIM LCRs LOB Assembly A change to a row in a table that does not include any LOB columns results in a single row LCR, but a change to a row that includes one or more LOB columns may result in multiple row LCRs. An apply process that does not send row LCRs that contain LOB columns to an apply handler can apply these row LCRs directly. However, before Oracle Database 10g Release 2, custom processing of row LCRs that contain LOB columns required that apply handlers be configured to process multiple LCRs for a single row change. LOB assembly simplifies custom processing of captured row LCRs with LOB columns. LOB assembly automatically combines multiple captured row LCRs resulting from a change to a row with LOB columns into one row LCR. An apply process passes this single row LCR to a DML handler or error handler when LOB assembly is enabled. Also, after LOB assembly, the LOB column values are represented by LOB locators, not by VARCHAR2 or RAW data type values. Note: To use a DML or error handler to process assembled LOBs at multiple destination databases, LOB assembly must assemble the LOBs separately on each destination database. UPDATE LCRs Oracle Database 11g: Implement Streams I - 396

397 Implementing LOB Assembly
DBMS_APPLY_ADM.SET_DML_HANDLER( object_name => 'PM.AD_DATA', object_type => 'TABLE', operation_name => 'UPDATE', error_handler => false, user_procedure => 'PM.STAMP_AD_IMAGES', assemble_lobs => TRUE); Implementing LOB Assembly To enable LOB assembly for a DML or error handler, set the assemble_lobs parameter to TRUE in the DBMS_APPLY_ADM.SET_DML_HANDLER procedure. If the assemble_lobs parameter is set to FALSE for a DML or error handler, LOB assembly is disabled and multiple row LCRs are passed to the handler for a change to a single row with LOB columns. When LOB assembly is enabled, a DML or error handler may modify LOB columns in a row LCR. Within the apply handler procedure, the preferred way to perform operations on a LOB is to use a subprogram in the DBMS_LOB package. If a row LCR contains a LOB column that is NULL, a new LOB locator must replace the NULL value. If a row LCR will be applied with the EXECUTE member procedure, use the ADD_COLUMN and SET_VALUE member procedures for row LCRs to make changes to a LOB. When LOB assembly is enabled, LOB assembly converts non-NULL LOB columns in user-enqueued LCRs into LOB locators. However, LOB assembly does not combine multiple user-enqueued row LCRs into a single row LCR—for example, for user-enqueued row LCRs, LOB assembly does not combine multiple LOB WRITE row LCRs following an INSERT row LCR into a single INSERT row LCR. LOB assembly LOB locator AD_DATA Oracle Database 11g: Implement Streams I - 397

398 Oracle Database 11g: Implement Streams I - 398
Implementing LOB Assembly (continued) Refer to the Oracle Streams Replication Administrator’s Guide for an example of a LOB assembly procedure. Note: LOB assembly cannot assemble row LCRs from a table containing any LONG or LONG RAW columns. The compatibility level of the database running an apply handler must be 10.2.0 or higher to specify LOB assembly for the apply handler. Row LCRs captured on a database running with a compatibility level lower than 10.2.0 cannot be assembled by LOB assembly. Also, row LCRs captured on a database running an Oracle Database release before Oracle Database 10g Release 2 cannot be assembled by LOB assembly. If custom rule-based transformations were performed on row LCRs that contain LOB columns during capture, propagation, or apply, then an apply handler operates on the transformed row LCRs. If there are LONG or LONG RAW columns at a source database, and a custom rule-based transformation uses the CONVERT_LONG_TO_LOB_CHUNK member function for row LCRs to convert them to LOBs, LOB assembly may be enabled for apply handlers that operate on these row LCRs. Oracle Database 11g: Implement Streams I - 398
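A handler registered with assemble_lobs => TRUE receives one assembled row LCR whose LOB columns are locators. This hypothetical skeleton of the PM.STAMP_AD_IMAGES procedure named in the earlier slide shows the general shape (the real procedure body is not shown in the course material):

```sql
-- Hypothetical skeleton only: illustrates the shape of a LOB-assembly handler
CREATE OR REPLACE PROCEDURE pm.stamp_ad_images (in_any IN SYS.AnyData)
IS
  lcr SYS.LCR$_ROW_RECORD;
  t   PLS_INTEGER;
BEGIN
  t := in_any.GetObject(lcr);  -- one LCR per row change, thanks to LOB assembly
  -- Inspect or modify the LOB columns here, preferably with DBMS_LOB
  -- subprograms on the assembled locators; replace a NULL LOB column
  -- with a new locator before executing, if required.
  lcr.EXECUTE(true);           -- apply the assembled row LCR
END;
/
```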

399 Managing Apply Handlers
By using the DBMS_APPLY_ADM.ALTER_APPLY procedure, you can: Specify or remove a message or DDL handler for an apply process Specify or remove a precommit handler for an apply process With the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, you can: Specify or remove a DML handler for a shared database object Specify or remove an error handler for a shared database object Managing Apply Handlers By using the DBMS_APPLY_ADM package, you can manage apply handlers as follows: Specify or remove a user-defined procedure (message handler) that processes non-LCR messages in the queue for the apply process. Specify or remove a user-defined procedure (DDL handler) that processes DDL LCRs in the queue for the apply process. Set or remove a DML handler for a specific operation on a table. Set or remove an error handler for a specific operation on a table. You can also set the DDL, message, or precommit handler by using the CREATE_APPLY procedure of DBMS_APPLY_ADM. Note You cannot have a DML handler and an error handler simultaneously for the same operation on the same table. You can have only one DDL, message, or precommit handler for each apply process. Oracle Database 11g: Implement Streams I - 399
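For example, the management operations above map onto calls of the following shape; the apply process name STRM01_APPLY and the handler procedure name are illustrative:

```sql
-- Specify a message handler for an apply process
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name      => 'strm01_apply',
    message_handler => 'strmadmin.mes_handler');
END;
/

-- Remove the message handler
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name             => 'strm01_apply',
    remove_message_handler => TRUE);
END;
/

-- Remove a DML handler by setting the user procedure to NULL
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'oe.orders',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    user_procedure => NULL);
END;
/
```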

400 Viewing Apply Handler Information
ALL_APPLY: DDL handlers, message handlers, precommit handlers ALL_APPLY_DML_HANDLERS: DML handlers or error handlers
SELECT object_name "TABLE", operation_name, user_procedure,
       apply_name APPLY
FROM   DBA_APPLY_DML_HANDLERS;

TABLE   OPERATION  USER_PROCEDURE            APPLY
------  ---------  ------------------------  -----
ORDERS  INSERT     "OE"."CONV_ORDER_TOTALS"
Viewing Apply Handler Information The ALL_APPLY and DBA_APPLY views contain information about DDL, precommit, and message handlers for each apply process. You can determine whether a DML handler or error handler has been set for a specific apply process or as a general apply handler by querying the DBA_APPLY_DML_HANDLERS or ALL_APPLY_DML_HANDLERS dictionary views. These views also contain information about the DML operations for which the apply handlers are invoked. If the error_handler column contains a value of TRUE, the user procedure is an error handler. If the value of this column is FALSE, the user procedure is a DML handler. In the sample output shown in the slide, the OE.CONV_ORDER_TOTALS procedure is configured as a general apply handler procedure for all INSERT operations on the ORDERS table. Oracle Database 11g: Implement Streams I - 400
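The DDL, message, and precommit handlers set for each apply process can be listed with a query along these lines (column names per the DBA_APPLY view):

```sql
SELECT apply_name,
       ddl_handler,
       message_handler,
       precommit_handler
FROM   DBA_APPLY;
```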

401 Oracle Database 11g: Implement Streams I - 401
Summary In this lesson, you should have learned how to: List the different types of apply handlers Describe the purpose of an error handler Create and use an apply handler Manage apply handlers for a database Oracle Database 11g: Implement Streams I - 401

402 Practice 13 Overview: Configuring an Apply Handler
This practice covers the following topics: Creating a PL/SQL routine to be implemented as an apply handler Altering an apply process to specify the apply handler Querying the data dictionary views for apply handler information [Diagram: the AMER database (STRM01_CAPTURE, Q_CAPTURE, schema OE, supplemental log group on ORDER_ID and ORDER_STATUS) propagates OE.ORDERS changes through STRM01_PROPAGATION to the EURO database (Q_APPLY, STRM01_APPLY), where the OE.LOG_PAID_ORDERS procedure writes to OE.ORDER_AUDIT] Oracle Database 11g: Implement Streams I - 402

403 Administering a Streams Environment

404 Oracle Database 11g: Implement Streams I - 404
Objectives After completing this lesson, you should be able to: Split and merge Streams Alter a Streams environment Administer SCNs and the Streams data dictionary Oracle Database 11g: Implement Streams I - 404

405 Splitting and Merging of Streams
Split a stream when the destination is unavailable for an extended time: Simplifies management of hub-and-spoke environments Prevents unnecessary spill to disk Allows fast “catch-up” of the unavailable database Merge the stream to return to the original configuration Splitting and Merging of Streams Occasionally, a destination database in a hub-and-spoke replication environment may be unavailable for an extended time. For example, the database may be shut down for hardware or building maintenance for several hours. During this time frame, changes for that destination may spill from memory and be written to disk. The SPLIT_STREAMS procedure allows the database administrator (DBA) to avoid this additional disk processing at the source database by splitting the disabled stream that is flowing from the capture process from all other streams. When the database becomes available again, the split stream is started so that it can catch up to the current time quickly. After the database is caught up to all other spoke databases, it can be merged back into the original configuration by using the MERGE_STREAMS procedure. The DBMS_STREAMS_ADM package contains the SPLIT_STREAMS and MERGE_STREAMS procedures. Oracle Database 11g: Implement Streams I - 405

406 Streams: Hub with Three Spokes
[Diagram: a source database with a capture process enqueues LCRs into a queue; propagations A, B, and C deliver the LCRs to queues at destination databases A, B, and C, where apply processes dequeue them] Streams: Hub with Three Spokes The graphic shows a streams replication implementation where changes captured in a source database S are propagated to three destination databases A, B, and C. The changes are then applied in the destination databases by apply processes in each database. Oracle Database 11g: Implement Streams I - 406

407 Split Streams: Site A Unavailable
[Diagram: at the source database, a cloned queue and a disabled cloned capture process feed cloned propagation A toward destination database A, while the original capture process and propagations B and C continue to serve destination databases B and C] Split Streams: Site A Unavailable The SPLIT_STREAMS procedure splits the stream by creating a “clone” of the queue, the capture process, and the propagation for that disabled site. The original propagation to that database is dropped, removing any messages for that database from the memory queue. The original capture process is stopped briefly during the split operation. After the split is complete, the original capture is restarted automatically. If the destination database A is down for an extended period, messages intended for destination database A build up in the queue at the capture database. The illustration shows the configuration resulting from the SPLIT_STREAMS procedure. The capture database contains a cloned capture process and a cloned queue. The cloned capture process is disabled. The cloned propagation A at the capture database is configured to propagate LCRs to a queue at the destination database A when destination database A is up and the cloned capture process is enabled. The original propagation to destination A is dropped. Any outstanding messages in the queue for destination A are removed. These messages are regenerated when the cloned capture process is started after destination A becomes available. Oracle Database 11g: Implement Streams I - 407

408 Oracle Database 11g: Implement Streams I - 408
Split Streams: Site A Unavailable (continued) The setup is as follows: Destination database A contains a queue that accepts LCRs from propagation A. This database also contains an apply process that dequeues LCRs from the queue. Destination database B contains a queue that accepts LCRs from propagation B. This database also contains an apply process that dequeues LCRs from the queue. Destination database A is down, and the cloned capture process at the source database is disabled. This cloned capture process must be started manually by the DBA when database A becomes available. The figure shows a Streams replication environment with a problem destination. Destination database A is down, and the SPLIT_STREAMS procedure has created the cloned capture and propagation. Oracle Database 11g: Implement Streams I - 408

409 SPLIT_STREAMS Procedure
[Diagram: the split configuration, highlighting the cloned queue, cloned propagation A, and the disabled cloned capture process at the source database] SPLIT_STREAMS Procedure Specifically, the procedure performs the following actions:
1. Creates a new queue at the database running the capture process. The new queue is called the cloned queue because it is a clone of the queue used by the original stream. The new queue is used by the new, cloned capture process, and it is the source queue for the new, cloned propagation.
2. Creates a new propagation that propagates messages from the source queue created in step 1 to the existing destination queue. The new propagation is called the cloned propagation because it is a clone of the propagation used by the original stream. The cloned propagation uses the same rule set as the original propagation.
3. Stops the capture process.
4. Queries the acknowledged system change number (SCN) for the original propagation. The acknowledged SCN is the last SCN acknowledged by the apply process that applies the changes sent by the propagation.
5. Creates a new capture process. The new capture process is called the cloned capture process because it is a clone of the capture process used by the original stream. The procedure sets the start SCN for the cloned capture process to the value of the acknowledged SCN queried in step 4. The cloned capture process uses the same rule set as the original capture process. The state of the cloned capture is DISABLED.
Oracle Database 11g: Implement Streams I - 409

410 Oracle Database 11g: Implement Streams I - 410
SPLIT_STREAMS Procedure (continued) 6. Drops the original propagation 7. Starts the original capture process with the start SCN set to the acknowledged SCN queried in step 4 The figure shows the cloned stream created by the SPLIT_STREAMS procedure. Oracle Database 11g: Implement Streams I - 410

411 Split Streams: Site A Available
[Diagram: the same split configuration, now with the cloned capture process enabled and cloned propagation A delivering LCRs to the restored destination database A] Split Streams: Site A Available When destination database A becomes available, the DBA starts the cloned capture at the source database to begin the “catch-up” operation. The figure shows a destination database A, which is running, and a cloned capture process, which is enabled at the capture database. The cloned stream begins to flow and starts to catch up to the original streams. The original capture and propagations to B and C continue without interruption. Oracle Database 11g: Implement Streams I - 411

412 Merge Streams: Original Configuration
[Diagram: the original hub-and-spoke configuration restored; one capture process and queue at the source database, with propagations A, B, and C feeding apply processes at destination databases A, B, and C] Merge Streams: Original Configuration MERGE_STREAMS can be performed both manually and automatically. This illustration shows a Streams replication environment with a capture database and three destination databases. These databases are as follows: The capture database has a capture process and a queue. The capture process enqueues captured LCRs into the queue. Propagation A at the capture database propagates LCRs to a queue at destination database A. Propagation B at the capture database propagates LCRs to a queue at destination database B. Propagation C at the capture database propagates LCRs to a queue at destination database C. Destination database A contains a queue that accepts LCRs from propagation A. This database also contains an apply process that dequeues LCRs from the queue. Destination database B contains a queue that accepts LCRs from propagation B. This database also contains an apply process that dequeues LCRs from the queue. Destination database C contains a queue that accepts LCRs from propagation C. This database also contains an apply process that dequeues LCRs from the queue. Oracle Database 11g: Implement Streams I - 412

413 Oracle Database 11g: Implement Streams I - 413
Merge Streams: Original Configuration (continued) After the cloned_capture cloned capture process starts running, it captures changes that satisfy its rule sets from the acknowledged SCN forward. These changes are propagated by the cloned_prop_a propagation and processed by an apply process at the destination database. When the difference between CAPTURE_MESSAGE_CREATE_TIME in the GV$STREAMS_CAPTURE view of the cloned_capture cloned capture process and the original capture process is less than or equal to 60 seconds, the MERGE_STREAMS procedure is run automatically to merge the streams back together. The MERGE_STREAMS procedure performs the following actions:
Stops the cloned capture process
Stops the original capture process
Re-creates the original propagation. This propagation propagates messages from the original queue to the existing destination queue used by the cloned propagation. The re-created propagation uses the same rule set as the cloned propagation.
Starts the original capture process from the lower SCN value of these two SCN values: the acknowledged SCN of the cloned_prop_a cloned propagation, and the lowest acknowledged SCN of the other propagations that propagate changes captured by the original capture process. When the capture process is started, it may recapture changes that it already captured, or it may capture changes that were already captured by the cloned_capture cloned capture process. In either case, the relevant apply processes will discard any duplicate changes they receive.
Drops the cloned propagation
Drops the cloned capture process
Drops the cloned queue
After the streams are merged, the Streams replication environment has the same components as it had before the split-and-merge operation. Note: If the original capture process is a downstream capture process, you must configure the cloned capture process to read the redo log from the source database before you start the cloned capture process. 
During merge, both the original and cloned capture processes are stopped. Only the original capture is started at the conclusion of the merge. Oracle Database 11g: Implement Streams I - 413
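To judge how far the cloned capture process lags the original, and therefore how close an automatic merge is, you can compare the message creation times reported by GV$STREAMS_CAPTURE, for example:

```sql
SELECT capture_name,
       TO_CHAR(capture_message_create_time,
               'HH24:MI:SS MM/DD/YY') AS create_time
FROM   GV$STREAMS_CAPTURE;
```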

414 DBMS_STREAMS_ADM.SPLIT_STREAMS Method
DBMS_STREAMS_ADM.SPLIT_STREAMS(
  PROPAGATION_NAME         IN     VARCHAR2,
  CLONED_PROPAGATION_NAME  IN     VARCHAR2 DEFAULT NULL,
  CLONED_QUEUE_NAME        IN     VARCHAR2 DEFAULT NULL,
  CLONED_CAPTURE_NAME      IN     VARCHAR2 DEFAULT NULL,
  PERFORM_ACTIONS          IN     BOOLEAN  DEFAULT TRUE,
  SCRIPT_NAME              IN     VARCHAR2 DEFAULT NULL,
  SCRIPT_DIRECTORY_OBJECT  IN     VARCHAR2 DEFAULT NULL,
  AUTO_MERGE_THRESHOLD     IN     NUMBER   DEFAULT NULL,
  SCHEDULE_NAME            IN OUT VARCHAR2,
  MERGE_JOB_NAME           IN OUT VARCHAR2);
DBMS_STREAMS_ADM.SPLIT_STREAMS Method Parameters for DBMS_STREAMS_ADM.SPLIT_STREAMS are: PROPAGATION_NAME: The name of the propagation that cannot send messages to its destination queue. The specified propagation is the propagation for the stream that is being split off from the other streams. You must specify an existing propagation name. Do not specify an owner. CLONED_PROPAGATION_NAME: The name of the new propagation created by this procedure for the stream that is split off. If NULL, the system generates a propagation name. CLONED_QUEUE_NAME: The name of the new queue created by this procedure for the stream that is split off. If NULL, the system generates a queue name. CLONED_CAPTURE_NAME: The name of the new capture process created by this procedure for the stream that is split off. If NULL, the system generates a capture process name. PERFORM_ACTIONS: If TRUE, the procedure performs the necessary actions to split the stream directly. If FALSE, the procedure does not perform the necessary actions to split the stream directly. Specify FALSE when this procedure is generating a script that you can edit and then run. The procedure raises an error if you specify FALSE and either the SCRIPT_NAME or SCRIPT_DIRECTORY_OBJECT parameter is NULL. Oracle Database 11g: Implement Streams I - 414

415 Oracle Database 11g: Implement Streams I - 415
DBMS_STREAMS_ADM.SPLIT_STREAMS Method (continued) SCRIPT_DIRECTORY_OBJECT: The directory object for the directory on the local computer system into which the generated script is placed. If the script_name parameter is NULL, the procedure ignores this parameter and does not generate a script. If NULL and the script_name parameter is non-NULL, the procedure raises an error. auto_merge_threshold: If a positive number is specified, the stream that was split off is automatically merged back into all the other streams flowing from the capture process. SCRIPT_NAME: If non-NULL and the perform_actions parameter is FALSE, specify the name of the script generated by this procedure. The script contains all the statements used to split the stream. If a file with the specified script name exists in the specified directory for the SCRIPT_DIRECTORY_OBJECT parameter, the procedure appends the statements to the existing file. If non-NULL and the perform_actions parameter is TRUE, the procedure generates the specified script and performs the actions to split the stream directly. If NULL and the perform_actions parameter is TRUE, the procedure performs the actions to split the stream directly and does not generate a script. If NULL and the perform_actions parameter is FALSE, the procedure raises an error. The value of the CAPTURE_MESSAGE_CREATE_TIME column for each capture process in the GV$STREAMS_CAPTURE dynamic performance view determines when the streams are merged. Specifically, if the difference, in seconds, between CAPTURE_MESSAGE_CREATE_TIME of the cloned capture process and the original capture process is less than or equal to the value specified for the AUTO_MERGE_THRESHOLD parameter, the two streams are merged automatically. The cloned capture process must be started before the split stream can be merged back with the original stream. 
If NULL or 0 (zero) is specified for the AUTO_MERGE_THRESHOLD parameter, the split stream is not merged back with the original stream automatically. To merge the split stream with the original stream, run the MERGE_STREAM procedure manually when the CAPTURE_MESSAGE_CREATE_TIME of the cloned capture process catches up to, or nearly catches up to, the CAPTURE_MESSAGE_CREATE_TIME of the original capture process. The CAPTURE_MESSAGE_CREATE_TIME records the time when a captured change was recorded in the redo log. SCHEDULE_NAME: The name of the schedule MERGE_JOB_NAME: The name of the job used for merging Oracle Database 11g: Implement Streams I - 415
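Putting these parameters together, a split with automatic merge once the clone is within 60 seconds of the original might look like the following; the propagation and clone names are illustrative:

```sql
DECLARE
  sched_name VARCHAR2(30);
  job_name   VARCHAR2(30);
BEGIN
  DBMS_STREAMS_ADM.SPLIT_STREAMS(
    propagation_name        => 'propagation_a',  -- stream to split off
    cloned_propagation_name => 'cloned_prop_a',
    cloned_queue_name       => 'cloned_queue_a',
    cloned_capture_name     => 'cloned_capture',
    perform_actions         => TRUE,
    auto_merge_threshold    => 60,               -- merge automatically
    schedule_name           => sched_name,       -- returned by the procedure
    merge_job_name          => job_name);        -- returned by the procedure
END;
/
```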

416 DBMS_STREAMS_ADM.MERGE_STREAMS Method
DBMS_STREAMS_ADM.MERGE_STREAMS(
  CLONED_PROPAGATION_NAME  IN VARCHAR2,
  PROPAGATION_NAME         IN VARCHAR2 DEFAULT NULL,
  QUEUE_NAME               IN VARCHAR2 DEFAULT NULL,
  PERFORM_ACTIONS          IN BOOLEAN  DEFAULT TRUE,
  SCRIPT_NAME              IN VARCHAR2 DEFAULT NULL,
  SCRIPT_DIRECTORY_OBJECT  IN VARCHAR2 DEFAULT NULL);
DBMS_STREAMS_ADM.MERGE_STREAMS Method Parameters for DBMS_STREAMS_ADM.MERGE_STREAMS are: CLONED_PROPAGATION_NAME: The name of the cloned propagation used by the stream that was split off from the original stream by using the SPLIT_STREAMS procedure. The name of the cloned propagation also identifies the cloned queue and capture process used by the cloned propagation. You must specify an existing propagation name. Do not specify an owner. PROPAGATION_NAME: The name of the propagation that is merged back to the original stream. If NULL, the name of the original propagation in the original stream is used. Specify NULL only if the streams were split by using the SPLIT_STREAMS procedure. Specify a non-NULL value if you want to use a name that is different than the original propagation name or if you are merging two streams that were not split by the SPLIT_STREAMS procedure. If a non-NULL value is specified, an error is raised under either of the following conditions: The queue specified in the queue_name parameter does not exist. The queue specified in the queue_name parameter exists but is not used by a capture process. Oracle Database 11g: Implement Streams I - 416

417 Oracle Database 11g: Implement Streams I - 417
DBMS_STREAMS_ADM.MERGE_STREAMS Method (continued) QUEUE_NAME: The name of the queue that is the source queue for the propagation that is merged back. If NULL, the existing, original queue is the source queue for the propagation that is merged back. Specify NULL only if the streams were split by using the SPLIT_STREAMS procedure. Specify a non-NULL value if you are merging two streams that were not split by the SPLIT_STREAMS procedure. Specify the name of the existing queue used by the capture process that will capture changes in the merged stream. PERFORM_ACTIONS: If TRUE, the procedure performs the necessary actions to merge the streams directly. If FALSE, the procedure does not perform the necessary actions to merge the streams directly. Specify FALSE when this procedure is generating a script that you can edit and then run. The procedure raises an error if you specify FALSE and either SCRIPT_NAME or SCRIPT_DIRECTORY_OBJECT is NULL. SCRIPT_NAME: If non-NULL and the perform_actions parameter is FALSE, specify the name of the script generated by this procedure. The script contains all the statements used to merge the streams. If a file with the specified script name exists in the specified directory for the script_directory_object parameter, the procedure appends the statements to the existing file. If non-NULL and the perform_actions parameter is TRUE, the procedure generates the specified script and performs the actions to merge the streams directly. If NULL and the perform_actions parameter is TRUE, the procedure performs the actions to merge the streams directly and does not generate a script. If NULL and the perform_actions parameter is FALSE, the procedure raises an error. SCRIPT_DIRECTORY_OBJECT: The directory object for the directory on the local computer system into which the generated script is placed. If the script_name parameter is NULL, the procedure ignores this parameter and does not generate a script. 
If NULL and the script_name parameter is non-NULL, the procedure raises an error. Oracle Database 11g: Implement Streams I - 417
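If the streams were split with SPLIT_STREAMS, the merge usually needs only the cloned propagation name; the original propagation, queue, and capture process are identified from it:

```sql
BEGIN
  DBMS_STREAMS_ADM.MERGE_STREAMS(
    cloned_propagation_name => 'cloned_prop_a');  -- name from the split
END;
/
```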

418 Using the MERGE_STREAMS_JOB Procedure
PROCEDURE MERGE_STREAMS_JOB
 Argument Name        Type      In/Out  Default?
 CAPTURE_NAME         VARCHAR2  IN
 CLONED_CAPTURE_NAME  VARCHAR2  IN
 MERGE_THRESHOLD      NUMBER    IN
 SCHEDULE_NAME        VARCHAR2  IN      DEFAULT
 MERGE_JOB_NAME       VARCHAR2  IN      DEFAULT
Using the MERGE_STREAMS_JOB Procedure The MERGE_STREAMS_JOB procedure determines whether the original capture process and the cloned capture process are within the specified merge threshold. If they are within the merge threshold, this procedure runs the MERGE_STREAMS procedure to merge the two streams. If the auto_merge_threshold parameter was set to a positive number in the SPLIT_STREAMS procedure that split the streams, a merge job runs the MERGE_STREAMS_JOB procedure automatically according to its schedule. The schedule name is specified for the schedule_name parameter, and the merge job name is specified for the merge_job_name parameter when the MERGE_STREAMS_JOB procedure is run automatically. The merge job and its schedule were created by the SPLIT_STREAMS procedure. If the AUTO_MERGE_THRESHOLD parameter was set to NULL or 0 (zero) in the SPLIT_STREAMS procedure that split the streams, you can run the MERGE_STREAMS_JOB procedure manually. In this case, it is not run automatically. Oracle Database 11g: Implement Streams I - 418
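When AUTO_MERGE_THRESHOLD was NULL or 0, the merge job can be invoked manually. A call of this shape checks the threshold and merges the streams if the two capture processes are close enough; the capture names are illustrative:

```sql
BEGIN
  DBMS_STREAMS_ADM.MERGE_STREAMS_JOB(
    capture_name        => 'capture',
    cloned_capture_name => 'cloned_capture',
    merge_threshold     => 60);  -- merge if within 60 seconds
END;
/
```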

419 Capture Process Checkpoints
[Diagram: redo SCN timeline (868 through 1053) marking the first SCN, the required checkpoint SCN, and the maximum checkpoint SCN] Capture Process Checkpoints While a capture process is running, it tries to record a checkpoint at regular intervals. A checkpoint is a summary of the current state of the capture process. The checkpoint information is recorded in the data dictionary of the capture database. The checkpoint information includes the required checkpoint SCN, which is the lowest checkpoint SCN for which a capture process requires redo information. All redo log files, from the one that contains the required checkpoint SCN and onward, must be available to the capture process. If the instance is shut down and restarted, when the capture process restarts, it resumes mining the redo logs by using the required checkpoint SCN as its starting point. If the first SCN is reset for a capture process, the first SCN must be set to a value that is less than or equal to the required checkpoint SCN for the capture process. You can determine the required checkpoint SCN for a capture process by querying the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view. Oracle Database 11g: Implement Streams I - 419

420 Oracle Database 11g: Implement Streams I - 420
Capture Process Checkpoints (continued) The SCN value that corresponds to the last checkpoint recorded by a capture process is the maximum checkpoint SCN. If you create a capture process that captures changes from a source database, and if such other capture processes already exist (that capture changes from the same source database), the maximum checkpoint SCNs of the existing capture processes can help you decide whether the new capture process should create a new LogMiner data dictionary or share one of the existing LogMiner data dictionaries. You can determine the maximum checkpoint SCN for a capture process by querying the MAX_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view. Oracle Database 11g: Implement Streams I - 420
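The SCN values described above can all be read from DBA_CAPTURE, for example:

```sql
SELECT capture_name,
       first_scn,
       start_scn,
       required_checkpoint_scn,
       max_checkpoint_scn
FROM   DBA_CAPTURE;
```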

421 Managing Capture Process SCNs
[Diagram: redo SCN timeline (868 through 1117) marking the first SCN, the start SCN, the required checkpoint SCN, and the maximum checkpoint SCN] Managing Capture Process SCNs The first SCN is the lowest SCN in the redo log from which a capture process can capture changes. The start SCN is the SCN from which a capture process begins to capture changes. The start SCN for a capture process must be greater than or equal to the first SCN for the capture process. Neither the first SCN nor the start SCN are automatically adjusted except as a result of the maintenance performed based on the CHECKPOINT_RETENTION_TIME parameter. While a capture process is running, it tries to record a checkpoint at regular intervals. A checkpoint is a summary of the current state of the capture process. The checkpoint information is recorded in the data dictionary of the capture database. The checkpoint information includes the required checkpoint SCN, which is the lowest checkpoint SCN for which a capture process requires redo information. All redo log files, from the one that contains the required checkpoint SCN and onward, must be available to the capture process. If the instance is shut down and restarted, when the capture process restarts, it resumes mining the redo logs by using the required checkpoint SCN as its starting point. The checkpoint SCNs are automatically updated each time a capture process checkpoint completes. Oracle Database 11g: Implement Streams I - 421

422 Modifying FIRST_SCN and START_SCN
Increase the value of first_scn for a capture process to: Remove unneeded LogMiner checkpoint information from the LogMiner dictionary Ensure that old archived redo logs can be safely removed from disk Set start_scn for a capture process to a point in the past to recapture changes from the redo logs.
EXEC DBMS_CAPTURE_ADM.ALTER_CAPTURE( -
  capture_name => 'CAPTURE1', -
  first_scn => );
Modifying FIRST_SCN and START_SCN You can use the CREATE_CAPTURE and ALTER_CAPTURE procedures of DBMS_CAPTURE_ADM to alter the values of the first_scn and start_scn parameters. The capture parameter CHECKPOINT_RETENTION_TIME, introduced in Oracle Database 10g Release 2, is modified by ALTER_CAPTURE and is the automatic way to maintain FIRST_SCN: Either the number of days that a capture process should retain checkpoints before purging them automatically, or DBMS_CAPTURE_ADM.INFINITE if checkpoints should not be purged automatically. If NULL, the checkpoint retention time is not changed. If a number of days is specified, a capture process purges a checkpoint this specified number of days after the checkpoint was taken. Partial days can be specified using decimal values. For example, .25 specifies 6 hours. When the first SCN is modified, the capture process purges checkpoint information from its LogMiner data dictionary that requires the capture process to restart at an earlier SCN. To alter the first SCN for a capture process, the new first_scn value must be: Greater than the current first SCN for the capture process Less than or equal to both the current applied SCN and the required checkpoint SCN for the capture process You can determine the SCN values for a capture process by querying the DBA_CAPTURE data dictionary view. Oracle Database 11g: Implement Streams I - 422
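Because CHECKPOINT_RETENTION_TIME is the automatic way to maintain the first SCN, a retention of, say, 7 days can be set as follows (CAPTURE1 as in the slide example):

```sql
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'capture1',
    checkpoint_retention_time => 7);  -- purge checkpoints older than 7 days
END;
/
```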

423 Oracle Database 11g: Implement Streams I - 423
Modifying FIRST_SCN and START_SCN (continued) The start_scn for a capture process can be reset to any SCN or date greater than the first_scn. You can set the start SCN to a past time to recapture changes from the redo logs. If you set the start_scn to a value greater than the first_scn, when the capture process restarts, it scans redo logs starting with the one containing the first_scn, but does not start enqueuing changes until the start_scn is reached. Therefore, you cannot skip the mining of redo log files by setting the start_scn for a capture process to a more recent SCN. Setting the start_scn of a previously running capture results in capture starting from the required_checkpoint_scn but not enqueuing changes until the start_scn is reached. If the start_scn of a capture is modified after the creation of the capture but before the capture is started for the very first time, it scans the redo logs starting with the one containing the FIRST_SCN, but does not start enqueuing changes until the start_scn is reached. It is strongly recommended to avoid setting the start_scn before starting the capture for the very first time. Oracle Database 11g: Implement Streams I - 423

424 Altering FIRST_SCN for a Capture Process
[Diagram: two redo log timelines (logs A through K) showing the first SCN raised in two steps (1 and 2) relative to the start SCN, the required checkpoint SCN, and the maximum checkpoint SCN] Altering FIRST_SCN for a Capture Process When you reset a first SCN for a capture process, information below the new first SCN setting is purged from the LogMiner dictionary for the capture process automatically. Therefore, after the first SCN is reset for a capture process, the start SCN for the capture process cannot be set lower than the new first SCN. Also, redo log files that contain information before the new first SCN setting will never be needed by the capture process. In the first diagram in the slide, if the first SCN for a capture process is moved upward as shown, the redo logs A and B can be safely removed from the disk. If you specify a new first SCN value that is higher than the current start SCN for the capture process, the start SCN is set automatically to the new value of the first SCN. When the first SCN value is altered as shown in the second diagram in the slide, the value of the start SCN is adjusted to the same value. As a result, you can then safely remove redo logs C, D, and E from the disk. If you want to alter the start SCN for an existing capture process, the specified start SCN must be greater than or equal to the first SCN for the capture process. After increasing the first SCN as shown in the second diagram, the start SCN is equal to the first SCN. To specify a start SCN for a previous point in time, you can create a new capture process and specify a first SCN that corresponds to a previous data dictionary build in the redo log. Note: FIRST_SCN cannot be greater than REQUIRED_CHECKPOINT_SCN. Oracle Database 11g: Implement Streams I - 424

425 Removing Unnecessary Archived Log Files
The DBA_LOGMNR_PURGED_LOG data dictionary view lists the archived redo log files that are no longer needed by any capture process.

SQL> SELECT * FROM DBA_LOGMNR_PURGED_LOG;

FILE_NAME
-----------------------------------------------------------------------
/u01/app/oracle/oradata/amer/archive/amer_1_415_ dbf
/u01/app/oracle/oradata/amer/archive/amer_1_416_ dbf
/u01/app/oracle/flash_recovery_area/ED_AMER/archivelog/2007_07_27/o1_mf_1_417_0jdz18wo_.arc

Removing Unnecessary Archived Log Files After altering the first SCN for a capture process, the DBA_LOGMNR_PURGED_LOG data dictionary view lists the archived redo log files that are no longer needed by any capture process at the local database. These archived log files may be removed without affecting any existing capture process at the local database. A capture process must have access to the redo log file that includes the required checkpoint SCN, as well as access to all subsequent redo log files. When performing database backups, you should avoid removing archived log files that may be needed by the capture process. Oracle Database 11g: Implement Streams I - 425

426 Managing Streams with EM
By using the Oracle Enterprise Manager GUI tool, you can: Start and stop any capture process with a single click. By clicking the Parameters tab, you can also alter the parameters for a capture process. Easily enable or disable a propagation. You can also alter the propagation schedule by entering new values in the fields that are shown in the screenshot, and then clicking Apply. Alter the apply process parameters in a graphical format. You can also start and stop an apply process. The Overview subtab contains the following sections: Capture: Using the Capture section, you can: View the number of capture processes in the database and the number of capture processes with errors Drill down to see more information about the capture processes and manage the capture processes by clicking the number of capture processes Drill down to see more information about the capture processes with errors by clicking the number of capture processes with errors Oracle Database 11g: Implement Streams I - 426

427 Oracle Database 11g: Implement Streams I - 427
Managing Streams with EM (continued) Propagation: Using the Propagation section, you can: View the number of propagations in the database and the number of propagations with errors Drill down to see more information about the propagations and manage the propagations by clicking the number of propagations Drill down to see more information about the propagations with errors by clicking the number of propagations with errors Apply: Using the Apply section, you can: View the number of apply processes in the database and the number of apply processes with errors Drill down to see more information about the apply processes and manage the apply processes by clicking the number of apply processes Drill down to see more information about the apply processes with errors by clicking the number of apply processes with errors Messaging: Using the Messaging section, you can: View the number of queue tables in the database Drill down to see more information about the queue tables and manage the queue tables by clicking the number of queue tables View the number of queues in the database Drill down to see more information about the queues and manage the queues by clicking the number of queues View the total number of propagation errors Drill down to see more information about the propagation errors by clicking the number of propagation errors Oracle Database 11g: Implement Streams I - 427

428 Streams Dictionary Information
Streams dictionary information must be available at each destination site. Contains object numbers, object names, object types, column names, and other critical information for each source object Is stored in LogMiner tables Streams dictionary information is generated at the source site through: DBMS_CAPTURE_ADM.PREPARE_*_INSTANTIATION DBMS_STREAMS_ADM.ADD_*_RULE DBMS_CAPTURE_ADM.BUILD Streams Dictionary Information Propagations and apply processes use a Streams data dictionary to keep track of the database objects from a particular source database. A Streams data dictionary is populated whenever one or more database objects are prepared for instantiation at a source database. Specifically, when a database object is prepared for instantiation, its dictionary information is recorded in the redo log. When a capture process scans the redo log, it uses this information to populate the local Streams data dictionary for the source database. In the case of local capture, this Streams data dictionary is at the source database. In the case of downstream capture, this Streams data dictionary is at the downstream database. A Streams data dictionary is multiversioned. If a database has multiple propagations and apply processes, all of them use the same Streams data dictionary for a particular source database. A database can contain only one Streams data dictionary for a particular source database, but it can contain multiple Streams data dictionaries if it propagates or applies changes from multiple source databases. The Streams data dictionary maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column data types. This mapping keeps each captured event as small as possible because the event can store numbers rather than names internally. Oracle Database 11g: Implement Streams I - 428

429 Oracle Database 11g: Implement Streams I - 429
Streams Dictionary Information (continued) After an object has been prepared for instantiation, the local Streams data dictionary is updated when a DDL statement on the object is processed by a capture process. In addition, an internal message containing information about this DDL statement is captured and placed in the queue for the capture process. Propagations may then propagate these internal messages to destination queues at databases, and the information is used to update the Streams dictionaries at those sites. The Streams dictionary is created by default in the SYSAUX tablespace. In an Oracle9i database, the LogMiner tables are located in the SYSTEM tablespace by default. For an Oracle9i Release 2 database only, you must use the DBMS_LOGMNR_D.SET_TABLESPACE procedure to move the LogMiner tables to a different tablespace before any Streams configuration is performed. Oracle Database 11g: Implement Streams I - 429
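The data dictionary build that a Streams data dictionary is derived from can also be created explicitly with DBMS_CAPTURE_ADM.BUILD, which returns the SCN at which the dictionary was recorded in the redo log. A minimal sketch:

```sql
SET SERVEROUTPUT ON
DECLARE
  build_scn NUMBER;
BEGIN
  -- Record a copy of the source data dictionary in the redo log;
  -- the returned SCN can serve as the first SCN of a new capture process.
  DBMS_CAPTURE_ADM.BUILD(first_scn => build_scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary build at SCN ' || build_scn);
END;
/
```

The PREPARE_*_INSTANTIATION procedures perform a build implicitly when one is needed; an explicit call is useful when you want to note the build SCN for later capture creation.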

430 Checking for Apply Reader Trace Files
If the destination Streams dictionary does not contain information for an object referenced in a captured event, the apply process generates an error. Search the parallel execution server process trace files for the instance for the phrase: MISSING Streams multi-version data dictionary Determine which object is missing from the Streams dictionary. Missing dictionary information is not an apply error. grep MISSING *ap* Checking for Apply Reader Trace Files If you have not instantiated a shared table properly before attempting to apply the changes at the destination site, you can receive the error "MISSING Streams multi-version data dictionary." To determine which objects are missing from the Streams dictionary at the destination site, search the apply reader trace files for the term MISSING. The apply readers are parallel execution server processes, so you need to search any trace file with a name similar to ora_pnnn_<PID>.trc. After you have located a trace file containing the word MISSING, open it and find the line: MISSING Streams multi-version data dictionary. Use the following information in the trace file to identify the missing objects:
gdbnm: Global name of the source database of the missing object
objn: Object number at the source database of the missing object
objv: Object version number at the source
scn: SCN of the transaction that was missed
The missing dictionary information is not an apply error. Rather, it indicates a failure to properly instantiate the shared objects at the apply site. Oracle Database 11g: Implement Streams I - 430

431 Missing Multiversion Data Dictionary Information
To populate the Streams dictionary information at the apply site: Determine the scope of the missing information (table, schema, or global) at the apply site. At the source database, prepare for instantiation:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION('HR');
END;
/

Missing Multiversion Data Dictionary Information The source site can regenerate the information whenever the DBMS_CAPTURE_ADM.PREPARE_*_INSTANTIATION commands are issued, with no ill effects at any site. Oracle Database 11g: Implement Streams I - 431

432 Oracle Database 11g: Implement Streams I - 432
Summary In this lesson, you should have learned how to: Split and merge Streams Alter a Streams environment Administer SCNs and the Streams data dictionary Oracle Database 11g: Implement Streams I - 432

433 Practice 14 Overview: Administering Streams Components
This practice covers the following topics: Verifying that the transformation and apply handler are performing correctly Starting the apply process and verifying the state of other processes Replicating data with Streams to test your configuration Practice 14 Overview: Administering Streams Components In this practice, you verify that your transformation and apply handler are performing correctly. Changes made in the OE schema in the AMER database are propagated to the EURO database, and then a queue-to-queue propagation applies the changes to the DEMO schema. Oracle Database 11g: Implement Streams I - 433

434 Result of Practice 14: Administering Streams Components
[Diagram: AMER database (STRM01_CAPTURE, Q_CAPTURE, OE.ORDERS) connected by STRM01_PROPAGATION to the EURO database (Q_APPLY, STRM01_APPLY, OE.ORDERS, OE.LOG_PAID_ORDERS procedure, OE.ORDER_AUDIT), with STRM01_PROP_DEMO propagating queue-to-queue to STREAMS_QUEUE and STRM01_APPLY_DEMO applying to DEMO.ORDERS] Result of Practice 14: Administering Streams Components In this practice, you started Streams processes, verified the flow of LCRs, and checked the results in the OE.ORDERS, DEMO.ORDERS, and OE.ORDER_AUDIT tables of the EURO database. Oracle Database 11g: Implement Streams I - 434

435 Data Conflicts

436 Oracle Database 11g: Implement Streams I - 436
Objectives After completing this lesson, you should be able to: Distinguish among the different types of data conflicts Describe how conflicts are detected List methods for avoiding data conflicts Oracle Database 11g: Implement Streams I - 436

437 What Is a Replication Conflict?
When two or more transactions at different databases update the same shared data, a conflict can occur. An error is generated by the apply engine if the row within the DML LCR: Does not exist at the destination site Has a before image for a column from the source site that does not match the current column image at the destination site Would violate a constraint if implemented at the destination site [Diagram: HR.EMPLOYEES] What Is a Replication Conflict? Conflicts can occur in a Streams environment that permits concurrent DML operations on the same data at multiple sites. In a Streams environment, DML conflicts can occur only when an apply process is applying a message for a DML operation or a row LCR. An apply process automatically detects conflicts that are caused by row LCRs. For example, when two transactions originating from different databases update the same row in a table, a conflict can occur. When you configure a Streams environment, you must consider whether conflicts can occur. If your system design permits conflicts and a conflict occurs, you can configure automatic conflict resolution to resolve the conflict. In general, you must try to design a Streams environment that avoids the possibility of conflicts. Using several techniques, most system designs can avoid conflicts in all or a large percentage of the shared data. However, many applications require that some percentage of the shared data be updatable at multiple databases at any time. If this is the case, you must address the possibility of conflicts. Note: An apply process does not detect DDL conflicts or conflicts that result from user-enqueued messages. Make sure that your environment avoids these types of conflicts. Oracle Database 11g: Implement Streams I - 437

438 Conflict Example: Reservation System
SITE1: UPDATE seats SET ticket_id='1QA872' WHERE seat_id='12A' AND flight=934;
SITE2: UPDATE seats SET ticket_id='4HW127' WHERE seat_id='12A' AND flight=934;
Conflict Example: Reservation System When concurrent changes are allowed to the same data at multiple sites, replication conflicts can occur. The example in the slide shows a situation in which conflicts must be avoided at all costs. Oracle Database 11g: Implement Streams I - 438

439 Oracle Database 11g: Implement Streams I - 439
Error Queue
SITE1: UPDATE employees SET department_id=10 WHERE employee_id=115;
SITE2: UPDATE employees SET manager_id=108 WHERE employee_id=115;
Error Queue There are many types of data changes that can lead to a conflict. In the example shown in the slide, the employee whose ID is 115 is currently in department 30. If a user updates the employee's department ID to 10 at one site, but another user changes the manager ID for the same employee at another site, a data conflict can occur. Note: Conflicting updates are not executed; they are redirected to the error queue at the destination site. The error queue holds all row LCRs (logical change records) for transactions that: Were dequeued by the apply process Resulted in a conflict or apply error Could not be resolved by a conflict handler or error handler, if implemented When you configure a Streams environment, you must anticipate where replication conflicts can occur. If your system design permits replication conflicts and a conflict occurs, the system data does not converge until the conflict is resolved in some way. If you do not implement conflict resolution, these conflicts are stored in the error queue until they are handled by the Streams administrator. Oracle Database 11g: Implement Streams I - 439
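Transactions redirected to the error queue can be inspected through the DBA_APPLY_ERROR view; a sketch (the column selection is illustrative):

```sql
-- List the failed transactions waiting in the error queue
SELECT apply_name, source_database, local_transaction_id,
       error_message, message_count
FROM   dba_apply_error
ORDER  BY source_commit_scn;
```

The LOCAL_TRANSACTION_ID values returned here are what you pass to the DBMS_APPLY_ADM error-handling procedures when retrying or deleting a failed transaction.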

440 Types of Data Conflicts
Update Delete Uniqueness Foreign key Types of Data Conflicts An update conflict occurs when the apply process applies a row LCR that contains an update to a row that conflicts with another update to the same row. Update conflicts can happen when two transactions originating from different databases update the same row at nearly the same time. A delete conflict occurs when two transactions originate from different databases, with one transaction deleting a row and another transaction updating or deleting the same row. If the first transaction deletes the row, the row LCR of the second transaction now references a nonexistent row. If the first transaction updates the row, the column values in the row LCR of the second transaction no longer match the column values currently in the table, and the row to be deleted cannot be correctly identified. A uniqueness conflict occurs when the apply process applies a row LCR that contains a change to a row that violates a uniqueness integrity constraint, such as a primary key or unique constraint. For example, if two transactions originating from two different databases each attempt to insert a row into a table with the same primary key value, the transactions cause a uniqueness conflict. The conflict is detected by comparing the old value in the LCR (before image) with the existing value in the destination database. Oracle Database 11g: Implement Streams I - 440

441 Oracle Database 11g: Implement Streams I - 441
Types of Data Conflicts (continued) A foreign-key conflict occurs when the apply process attempts to apply a row LCR containing a change to a row that violates a foreign-key constraint. For example, in the HR schema, the department_id column of the EMPLOYEES table is a foreign key of the department_id column in the DEPARTMENTS table. Consider what can happen if the following changes originate at two different databases (SITE1 and SITE2) and are propagated to a third database (SITE3): At SITE1, a row is inserted into the DEPARTMENTS table with a department_id of 271. This change is propagated to database SITE2 and applied there. This change is also propagated directly to SITE3. At database SITE2, a row is inserted into the EMPLOYEES table with an employee_id of 206 and a department_id of 271. This change is then propagated directly to SITE3. If the change from SITE2 is applied to SITE3 before the change from SITE1 is applied to SITE3, a foreign-key conflict results because the row for department 271 does not yet exist in the DEPARTMENTS table at SITE3. This foreign-key conflict example also illustrates an ordering conflict. Ordering conflicts can occur in Streams when three or more databases share data and the data is updated at two or more databases. If you use irregular propagation schedules between the various databases, or if communication between two databases is interrupted for a significant period of time, ordering conflicts can occur. If an ordering conflict is encountered, you can resolve the conflict by reexecuting the transaction in the error queue after the other transactions involved in the ordering conflict have been applied. Note: Resolving data conflicts is covered in the lesson titled “Conflict Resolution.” Oracle Database 11g: Implement Streams I - 441

442 Identifying Rows in Conflict
Streams identifies a row by using: Primary key (default) Designated alternate key Substitute key (used first, if defined) Primary key serves as identifier column. Identifying Rows in Conflict To detect conflicts accurately, the Oracle server must be able to uniquely identify and match corresponding rows at different sites during data replication. Typically, Streams uses the primary key of a table to uniquely identify rows in the table. When a table does not have a primary key, you must designate an alternate key (a column or set of columns) that the database can use to identify rows in the table during conflict detection. It is recommended that a unique index be created on the columns specified as the alternate key of the table. If there is neither a primary key, nor a unique index that has at least one NOT NULL column, nor a substitute key for a table, Streams uses as the identification key all columns in the table that are not of the LOB, LONG, or LONG RAW type. Alternate key columns must be supplementally logged at the source. Note: When you designate an alternate key instead of a primary key, your application must guarantee uniqueness, or you will encounter TOO_MANY_ROWS errors when updates are performed. You must not allow applications to update the key columns of a table. This ensures that the Oracle server can identify rows and preserve the integrity of replicated data. Oracle Database 11g: Implement Streams I - 442

443 Specifying Substitute Key Columns
SQL> CONNECT
SQL> BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'first_name,last_name,hire_date');
END;
/

SQL> CONNECT AS SYSDBA
SQL> ALTER TABLE hr.employees ADD
  SUPPLEMENTAL LOG GROUP log_emp_supkey
  (first_name, last_name, hire_date) ALWAYS;

Specifying Substitute Key Columns The SET_KEY_COLUMNS procedure of DBMS_APPLY_ADM records the set of columns to be used as the substitute primary key for apply purposes and removes existing substitute primary key columns for the specified object (if they exist). Unlike true primary keys, the substitute key columns may contain NULLs. Also, the substitute key columns take precedence over any existing primary key for the specified table for all apply processes at the destination database. If you specify a substitute key for a table in a destination database, you must specify an unconditional supplemental log group for the substitute key columns at the source database, as shown in the second example in the slide. Note: It is recommended that each column that you specify as a substitute key column be a NOT NULL column. You should also create a single index that includes all the columns in a substitute key. Following these guidelines improves performance for updates, deletes, and piecewise updates to LOBs because the database can then locate the relevant row more efficiently. Oracle Database 11g: Implement Streams I - 443

444 Oracle Database 11g: Implement Streams I - 444
Detecting Conflicts An apply process detects: An update conflict if the old column values in an LCR do not match the current column values for the same row in the table A uniqueness conflict if a uniqueness constraint violation occurs when applying an LCR A delete conflict when a row with the same key column values as contained in a row LCR cannot be found in the destination table A foreign-key conflict if a foreign-key constraint violation occurs when applying an LCR Detecting Conflicts A conflict may be detected when an apply process attempts to apply an LCR directly to a table, or when an apply handler runs the EXECUTE member procedure for an LCR. A conflict can also be detected when either the EXECUTE_ERROR procedure or the EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package is run. Oracle Database 11g: Implement Streams I - 444
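When a detected conflict has sent a transaction to the error queue, it can be retried after the underlying data problem is fixed; the following is a hedged sketch using the DBMS_APPLY_ADM procedures named in the notes (the transaction ID and apply name are placeholders):

```sql
-- Retry one failed transaction from the error queue; conflict
-- detection runs again when the LCRs are re-executed.
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');
END;
/

-- Or retry every error recorded for one apply process
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'strm01_apply');
END;
/
```

If a retried transaction conflicts again, it returns to the error queue.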

445 Data Consistency and Convergence
Challenges. Single database: data integrity, local consistency. Shared data sources: global data convergence.
Remedies. Single database: constraints, triggers, procedural objects, application code. Shared data sources: avoid conflicts through workflow, application code, security mechanisms, and data partitioning; resolve conflicts with handlers and procedures.
Data Consistency and Convergence Being a database administrator, you are well aware of the challenges that data integrity poses. As you go one step further into a replication environment, you face more than just the sum of the challenges of all participating databases. Your challenge consists of: Letting multiple databases coexist while retaining full local update autonomy Guaranteeing data convergence across all databases The remedies at your disposal are: Avoiding conflicts as much as possible Business models may support or contradict specific replication setups. It is important to choose your replication setup in a way that allows business processes to run smoothly while preventing as many conflicts as possible. Business models can be implemented in workflow processes, access limitations, data partitioning, and your application code. Your application code is probably the place where you have the most fine-grained control over data-change activities. You may have to modify your application code if declarative conflict avoidance and resolution methods are exhausted. This should be your last choice because procedural code is always more difficult to maintain than declarative code. Oracle Database 11g: Implement Streams I - 445

446 Oracle Database 11g: Implement Streams I - 446
Data Consistency and Convergence (continued) Resolving conflicts by specifying rules of priority Conflict resolution methods are declarative rather than procedural. They can be very powerful tools, as long as you understand their limitations. For example, a conflict resolution method that works perfectly for a system with two source sites may fail in a system with three or more source sites. Oracle Database 11g: Implement Streams I - 446

447 Conflict Avoidance and Resolution Foundation
Sound and robust logical design Well-documented object dependencies Well-documented application logic Clearly defined application and module boundaries Clearly defined update rights and responsibilities Well-understood workflow concept Conflict Avoidance and Resolution Foundation For a conflict avoidance and resolution concept to be successful, you need a solid foundation on which to build. Oracle Database 11g: Implement Streams I - 447

448 Conflict Resolution Responsibilities
Goal / Responsible party:
Avoiding conflicts: DBA, programmer, user
Detecting DML conflicts: Streams
Resolving conflicts: Streams, DBA, programmer, user
Conflict Resolution Responsibilities An apply process does not detect DDL conflicts or conflicts resulting from user-enqueued messages. It is the responsibility of the DBA or programmer to make sure that the Streams environment avoids these types of conflicts. Oracle Database 11g: Implement Streams I - 448

449 Avoiding Conflicts by Assigning Data Ownership
Primary site ownership Row-level subsetting Column-level subsetting Dynamic site ownership Workflow Token passing Shared ownership Avoiding Conflicts by Assigning Data Ownership In addition to basic database design practices (for example, normalization and modularization), there are additional requirements that you must investigate when building a database that operates in a Streams replication environment. Although it is usually not possible to design a system completely devoid of data conflicts, you can design your system to avoid certain types of conflicts. One method of avoiding conflicts is data ownership. You can avoid the possibility of conflicts by limiting the number of databases in the system that have simultaneous update access to the tables containing shared data. Applications can even use row and column subsetting to establish more granular ownership of data than at the table level. For example, applications may have update access to specific columns or rows in a shared table on a database-by-database basis. If a primary database ownership model is too restrictive for your application requirements, you can use a shared ownership data model, in which all sites are allowed to update the shared data, thus allowing conflicts to occur. Oracle Database 11g: Implement Streams I - 449

450 Primary Site Ownership
[Diagram: with row subsetting, Site1 and Site2 each update a different range of rows of table X; with column subsetting, Site1 and Site2 each update a different group of columns of table X] Primary Site Ownership A primary site ownership data model prevents all replication conflicts because only a single server permits updates to a set of replicated data. Rather than controlling the ownership of data at the table level, applications can employ data subsetting to establish more granular static ownership of data: Row subsetting limits changes to a specific range of rows in a shared table at each source site. By using column subsetting, you can change only a specific group of columns at each source site. Note: To implement column subsetting, you must define column lists. Column subsetting can be enforced with application logic, constraints, and triggers. Oracle Database 11g: Implement Streams I - 450

451 Dynamic Site Ownership: Workflow Method
[Diagram: an Orders table (cust-no, order-no, status) replicated at the order entry, shipping, and billing sites; rows (162, 579, shippable), (163, 1001, billable), and (164, 632, open order) are owned by different sites according to their status] Dynamic Site Ownership: Workflow Method The dynamic site ownership data model is less restrictive than primary site ownership. With dynamic ownership, the capability to update a shared table moves from site to site, but only one site is allowed to update the shared data at any given instant. The concept of token passing uses a generalized approach to dynamic ownership. To implement token passing, shared tables typically have OWNER and EPOCH columns instead of the identifier columns: The OWNER column stores the global database name of the site that is currently believed to own the row. The EPOCH column stores an increasing, row-specific value that helps to determine which site became the owner last. It is as though each row has an individual clock that advances one minute every time the row is changed. When trying to find out who the current owner is, use the EPOCH column to find the most recent ownership. After a token-passing mechanism has been designed, you can use it to implement a variety of forms of dynamic subsetting of data ownership, including workflows. For example, in the diagram shown in the slide, an identifier column called STATUS is used to determine which site has ownership of the row. The applications used by each department read the status code of a product order (OPEN ORDER, SHIPPABLE, or BILLABLE) to determine when they can or cannot update the order. Oracle Database 11g: Implement Streams I - 451

452 Oracle Database 11g: Implement Streams I - 452
Dynamic Site Ownership: Workflow Method (continued) To successfully avoid conflicts, applications implementing dynamic data ownership must ensure that the following conditions are met: Only the owner of the row can update the row. The row is never owned by more than one site. Ordering conflicts can be successfully resolved at all sites. Oracle Database 11g: Implement Streams I - 452
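A token-passing update along these lines might look like the following sketch; the table, the OWNER/EPOCH columns, and the site names are hypothetical illustrations, not objects from the course schema:

```sql
-- The shipping site claims ownership of an order row only if the
-- owner and epoch it last read are still current (an optimistic check).
UPDATE orders
SET    owner  = 'SHIPPING_SITE',
       epoch  = epoch + 1,
       status = 'SHIPPABLE'
WHERE  order_no = 1001
AND    owner    = 'ORDER_ENTRY_SITE'
AND    epoch    = 4;
-- Zero rows updated means another site changed the row first,
-- so the application must re-read the row and retry.
```

Incrementing EPOCH on every ownership change is what lets each site decide which ownership claim is the most recent.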

453 Oracle Database 11g: Implement Streams I - 453
Shared Ownership [Diagram: an Orders table shared between an HQ site and two customer order entry sites, all of which can update the same orders data] Shared Ownership In a shared ownership data model, the same data can be updated at multiple sites at the same time, making it possible for update conflicts to occur. If you cannot design your system to avoid all update conflicts, you must understand the types of conflicts that are possible and then configure the system to resolve them if they occur. Oracle Streams can automatically resolve conflicts to ensure that over time the data converges to a consistent state at all sites in a multiple-source environment. An apply process does not detect DDL conflicts or conflicts resulting from user-enqueued messages. Make sure that your environment avoids these types of conflicts. If you decide to implement a shared ownership data model, you must design your system to avoid specific types of data conflicts. Oracle Database 11g: Implement Streams I - 453

454 Avoiding Conflicts: Overview
Uniqueness conflicts Delete conflicts Update conflicts Avoiding Conflicts: Overview You can use some simple strategies to avoid specific types of conflicts, such as delete and uniqueness conflicts. After trying to eliminate the possibility of uniqueness and delete conflicts, you must also try to limit the number of possible update conflicts. In a shared ownership data model, however, update conflicts cannot be avoided in all cases. If you cannot avoid all update conflicts, you must understand the types of possible conflicts and configure the system to resolve them if they occur. Oracle Database 11g: Implement Streams I - 454

455 Avoiding Uniqueness Conflicts
Use site-specific primary-key components: Use sequences in identification keys: Concatenate a globally unique value (for example, the DB name) to a local sequence. Create nonoverlapping individual sequences at each source site for use in identification keys.

SQL> SELECT sys_guid() "UID" FROM dual;

UID
D4F888045E11D49A

Avoiding Uniqueness Conflicts There are several ways to configure a replication environment that avoids uniqueness conflicts. Here are three options to consider: Use a globally unique value as the primary key (or unique) value. This avoids uniqueness conflicts on a global scale. You can generate such a value with the SYS_GUID() function: SQL> SELECT sys_guid() FROM dual; SYS_GUID() 4595EF13AB785E73E B40F58B This SQL operator returns a 16-byte globally unique identifier. The returned value is based on an algorithm that uses time, date, and machine identification information to generate a globally unique identifier. Use a composite primary key that includes a unique site identifier, or use a sequence with a site identifier appended to it as the primary or unique key. Create sequences at each source site so that each sequence generates a mutually exclusive set of sequence numbers. The sequences at each site must have different starting points but equal increments to avoid duplicates regardless of growth speed. Oracle Database 11g: Implement Streams I - 455
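The nonoverlapping-sequence option can be sketched as follows (the sequence name and ranges are illustrative): give each source site the same increment but a different starting offset, so the generated key values never collide:

```sql
-- At source site 1: generates 1, 3, 5, ...
CREATE SEQUENCE order_id_seq START WITH 1 INCREMENT BY 2;

-- At source site 2: generates 2, 4, 6, ...
CREATE SEQUENCE order_id_seq START WITH 2 INCREMENT BY 2;

-- Alternatively, append a site identifier to a local sequence value
-- to form a composite globally unique key:
SELECT order_id_seq.NEXTVAL || '-' || 'AMER' AS global_key FROM dual;
```

With an increment of two, this scheme supports exactly two sites; for n sites, use an increment of at least n with distinct starting offsets.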

456 Avoiding Delete Conflicts
Do not delete rows in a shared ownership data model. Use a column to mark rows for deletion. Purge marked rows on a periodic basis. Avoiding Delete Conflicts Delete conflicts must be avoided in all replicated data environments. In general, applications that operate in an asynchronous, shared ownership data model must not delete rows by using DELETE statements. The best way to deal with deleting rows in a Streams replication environment is to avoid the conflict by marking a row for deletion and then periodically purging the table of all marked records. Because you are not physically removing this row, your data can converge at all Streams sites if a conflict arises because you still have two values to compare. After you verify that your data has converged, you can purge marked rows. In developing your user application, remember to ignore the rows that have been marked for deletion. For example, if rows are marked for deletion using a DATE column, design your queries to select only those rows that do not have a specified deletion date: SELECT * FROM employees WHERE remove_date IS NULL; Oracle Database 11g: Implement Streams I - 456
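A mark-then-purge scheme along the lines described might be sketched as follows; the REMOVE_DATE column and the 30-day retention period are illustrative choices:

```sql
-- Add a marker column instead of physically deleting rows
ALTER TABLE employees ADD (remove_date DATE);

-- "Delete" a row by marking it; the update replicates and can still
-- be conflict-checked because the row continues to exist at all sites
UPDATE employees
SET    remove_date = SYSDATE
WHERE  employee_id = 206;

-- After verifying convergence at all sites, purge the marked rows
DELETE FROM employees
WHERE  remove_date < SYSDATE - 30;
```

Application queries must then filter out marked rows, for example with WHERE remove_date IS NULL, as noted above.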

457 Suppressing Conflict Detection
SITE1                         SITE2
ID  Name     Nickname         ID  Name     Nickname
15  William  Will -> Billy    15  William  Will -> Bill

UPDATE ix.attendees SET nickname='Billy' WHERE id=15;  -- at SITE1
UPDATE ix.attendees SET nickname='Bill' WHERE id=15;   -- at SITE2

Suppressing Conflict Detection By default, an apply process compares old values for all columns during conflict detection. You can stop conflict detection for non-key columns with the COMPARE_OLD_VALUES procedure of DBMS_APPLY_ADM. In the example in the slide, updating the same nickname at both sites would generate a conflict because the two values do not match. When the apply process at SITE2 attempts to apply the change from SITE1, it gets the old column values of 15 for ID and Will for NICKNAME from the LCR. Because the row in the destination table for ID 15 has already had the NICKNAME changed to Bill, the apply process fails to find a matching row. Oracle Database 11g: Implement Streams I - 457

458 Suppressing Conflict Detection
To specify that conflict detection must not be performed for the NICKNAME column, use the following code: BEGIN DBMS_APPLY_ADM.COMPARE_OLD_VALUES( object_name => 'IX.ATTENDEES', column_list => 'NICKNAME', operation => '*', compare => false); END; / Suppressing Conflict Detection (continued) You can use either a column_list parameter or a column_table parameter for this procedure: To input a comma-delimited list of the columns in the table, use the column_list parameter. There must be no spaces between entries. You can use '*' to include all non-key columns. If you want to supply the column names in a PL/SQL index-by table of the DBMS_UTILITY.LNAME_ARRAY type, use the column_table parameter. The first column name must be at position 1, the second at position 2, and so on. The table does not need to be NULL terminated. The operation parameter indicates the type of DML for which conflict detection must be disabled. It can be specified as: UPDATE DELETE * for both UPDATE and DELETE operations Use the apply_database_link parameter to disable conflict detection for columns at a non-Oracle database. Oracle Database 11g: Implement Streams I - 458
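For comparison, the same suppression can be expressed with the column_table parameter described in the notes. This variant is a sketch assembled from that parameter description, restricted here to UPDATE operations only:

```sql
DECLARE
  cols DBMS_UTILITY.LNAME_ARRAY;
BEGIN
  -- First column name at position 1, as required by column_table
  cols(1) := 'NICKNAME';
  DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
    object_name  => 'IX.ATTENDEES',
    column_table => cols,
    operation    => 'UPDATE',  -- only UPDATE; '*' would cover DELETE too
    compare      => FALSE);
END;
/
```

The column_table form is convenient when the column names are built programmatically; for a short, fixed set of columns the comma-delimited column_list shown in the slide is simpler.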

459 Cascading Delete Operations
DELETE FROM employees WHERE employee_id = 176;

[Diagram: the delete cascades from EMPLOYEES to JOB_HISTORY at SITE1, and Streams replicates the deletes to EMPLOYEES and JOB_HISTORY at SITE2.]

Cascading Delete Operations In Oracle Database 10g, Oracle Streams supports DELETE CASCADE constraints across databases, increasing the flexibility of your Streams environment and reducing the administrative cost of Streams. Suppose there are two tables, EMPLOYEES and JOB_HISTORY, in the SITE1 and SITE2 databases. The JOB_HISTORY table has a foreign-key constraint that references the EMPLOYEE_ID column in the EMPLOYEES table. This constraint was created with the ON DELETE CASCADE clause, which means that the dependent rows in the JOB_HISTORY table are deleted when you delete the referenced primary key value in the EMPLOYEES table. For example, if you remove an employee with an ID of 176, all job history records for that employee in the JOB_HISTORY table are also deleted as part of the same transaction. With a DELETE CASCADE constraint defined for the EMPLOYEES table, deleting a row in EMPLOYEES also generates redo for the rows deleted in JOB_HISTORY. The changes for both tables are captured by Streams if the proper rules are defined. When the row LCRs are applied at SITE2, rows in both the EMPLOYEES and JOB_HISTORY tables are deleted by the apply engine even if a similar DELETE CASCADE constraint exists at the destination database. Note: If your Streams environment includes one or more Oracle9i Release 2 databases, you must create separate code to implement cascading deletes on these databases. Oracle Database 11g: Implement Streams I - 459

460 Oracle Database 11g: Implement Streams I - 460
Summary In this lesson, you should have learned how to: Distinguish among the different types of data conflicts Describe how conflicts are detected List methods for avoiding data conflicts Oracle Database 11g: Implement Streams I - 460

461 Practice 15 Overview: Managing Data Conflicts
This practice covers the following topics: Creating a bi-directional schema replication as a test environment Generating a data conflict Querying the error queue Resolving data conflicts manually Modifying shared objects without having the changes captured Using DBMS_APPLY_ADM.EXECUTE_ERROR Note: Completing this practice is a prerequisite for the following practice. Oracle Database 11g: Implement Streams I - 461

462 Result of Practice 15: Detecting a Data Conflict
[Diagram: bi-directional replication of the HR schema (HR.EMPLOYEES) between the EURO and AMER databases, using capture processes HR_CAP and HR_CAP_2, queues HR_CAP_Q and HR_APPLY_Q, propagations HR_PROPAGATION and HR_PROPAGATION_2, and apply processes HR_APPLY and HR_APPLY_2.] Oracle Database 11g: Implement Streams I - 462

463 Conflict Resolution

464 Oracle Database 11g: Implement Streams I - 464
Objectives After completing this lesson, you should be able to: Describe how data conflicts can be resolved automatically List the prebuilt conflict handlers that are available with Oracle Streams Specify a resolution column for resolving conflicts Create a column list Create a conflict handler for an apply process Manage conflict handlers in a database Oracle Database 11g: Implement Streams I - 464

465 Oracle Database 11g: Implement Streams I - 465
Conflict Resolution DML conflicts are always detected but not always resolved. Multiple, prebuilt update conflict handlers are available with Oracle Streams. You can implement your own conflict resolution procedures using apply or error handlers. The primary key or substitute key is used to identify column conflicts. You must configure supplemental logging at the source database for any columns that are involved in conflict resolution at the destination database. Conflict Resolution To resolve conflicts in accordance with your business rules, Streams offers a variety of prebuilt handlers that you can use to implement a conflict resolution system for your database. If you have a unique situation that Oracle’s prebuilt update conflict handlers cannot resolve, you can build and use your own custom conflict resolution handlers in an error handler or DML handler. Whether you use prebuilt or custom conflict handlers, a conflict handler is applied as soon as a conflict is detected. If neither the specified conflict handler nor the relevant apply handler can resolve the conflict, the transaction is logged in the error queue. You can use the relevant apply handler to notify the Streams administrator when a conflict occurs or when a conflict cannot be resolved. Oracle Database 11g: Implement Streams I - 465

466 Prebuilt Conflict Handlers
For update conflicts, you can use the following prebuilt conflict resolution handlers: OVERWRITE DISCARD MAXIMUM MINIMUM There are no prebuilt conflict handlers for uniqueness, delete, ordering, or foreign-key conflicts.

[Diagram: the apply process (APnn) applies an LCR; on conflict, the conflict handler and then the error handler are invoked; unresolved transactions go to the apply error queue.]

Prebuilt Conflict Handlers Oracle Streams provides the following prebuilt conflict handlers: The OVERWRITE handler replaces the current row values in the table at the destination database with the new values in the row LCR from the source database. The DISCARD handler ignores the values in the row LCR from the source database and retains the row values in the table at the destination database. The MAXIMUM handler compares the new value in the row LCR from the source database with the current value of the designated resolution column in the table at the destination database for the row. If the new value of the resolution column in the LCR is greater than the current value of the resolution column in the table at the destination database, the apply process resolves the conflict by using the value in the LCR. If the new value of the resolution column in the LCR is less than the current value of the resolution column in the table, the apply process resolves the conflict by keeping the value of the resolution column in the table at the destination database. The MINIMUM handler works in a manner similar to the MAXIMUM conflict handler. It compares the new value in the row LCR from the source database with the current value of the designated resolution column in the table at the destination database, and keeps whichever change contains the lowest value for the column. Oracle Database 11g: Implement Streams I - 466

467 Oracle Database 11g: Implement Streams I - 467
Prebuilt Conflict Handlers (continued) If you want to resolve conflicts based on the time of the transactions that are involved, perform the following: 1. Add a column to a shared table to hold a time stamp marking the time when the last transaction modified the row. The column could be populated by a trigger that fires for INSERT or UPDATE commands. 2. Designate this column as a resolution column for a MAXIMUM conflict handler so that the transaction with the latest (or most recent) time stamp is used automatically. 3. Make sure that the column is supplementally logged at all source sites. When you use triggers to populate columns for conflict resolution purposes, verify that the trigger’s firing property is “fire once,” which is the default. Otherwise, the conflict resolution column may be updated when transactions are applied by an apply process, resulting in the loss of the actual time of the transaction. Oracle Database 11g: Implement Streams I - 467
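The "fire once" check in the last paragraph can be performed with the DBMS_DDL package. In this sketch, the trigger name is illustrative, not one defined by the course:

```sql
-- Triggers fire once by default; verify the property and reset it
-- if it has been changed, so that the conflict-resolution timestamp
-- is not overwritten when the apply process re-executes the change.
BEGIN
  IF NOT DBMS_DDL.IS_TRIGGER_FIRE_ONCE('HR', 'SET_LAST_UPDATE_TRG') THEN
    DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
      trig_owner => 'HR',
      trig_name  => 'SET_LAST_UPDATE_TRG',
      fire_once  => TRUE);
  END IF;
END;
/
```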

468 Oracle Database 11g: Implement Streams I - 468
Resolution Columns You must specify a conflict resolution column for all prebuilt conflict handlers. The MAXIMUM and MINIMUM conflict handlers cannot resolve an update conflict by using the resolution column if: The new resolution column value in the LCR and the current resolution column value in the table are equal Either the new resolution column value in the LCR or the current resolution column value in the table is NULL Resolution Columns The resolution column is the column used to identify an update conflict handler. If you use a MAXIMUM or MINIMUM prebuilt update conflict handler, the resolution column is also the column used to resolve the conflict. The resolution column must be one of the columns in the column list for the handler. Note: The resolution column is not used for OVERWRITE or DISCARD conflict handlers, but a resolution column must still be specified when implementing these conflict handlers. You can use any column in the column list. Oracle Database 11g: Implement Streams I - 468

469 Oracle Database 11g: Implement Streams I - 469
Column Lists Characteristics of column lists: A column list can consist of one or more columns of a table. A column can belong to only one column list. Conflict resolution is applicable to only those columns in the column list defined for the conflict handler. Supplemental logging for columns in the column list is required. Column lists are specified when configuring an update conflict handler. Column Lists Each time you specify a prebuilt update conflict handler for a table, you must specify a column list. A column list is a list of columns for which the update conflict handler is invoked. If a conflict occurs in a column that is not in a column list, the error handler for the specific operation on the table attempts to resolve the conflict. If the error handler cannot resolve the conflict, or if there is no such error handler, the transaction that caused the conflict is moved to the error queue. The scope of conflict resolution is a single column list on a single row LCR: If an update conflict occurs for at least one of the columns in a column list when a row LCR is being applied, the associated update conflict handler is called to resolve the conflict. If a conflict occurs for a column in a column list that uses the OVERWRITE, MAXIMUM, or MINIMUM prebuilt handler, and if the row LCR does not contain all the columns in this column list, the conflict cannot be resolved because all the values are not available. In this case, the transaction is moved to the error queue. If the column list uses the DISCARD prebuilt method, the row LCR is simply discarded. Oracle Database 11g: Implement Streams I - 469

470 Oracle Database 11g: Implement Streams I - 470
Column Lists (continued) If conflicts occur in more than one column list when a row LCR is being applied, and if there are no conflicts in any columns that are not in a column list, the appropriate update conflict handler is invoked for each column list involved in a conflict. By using column lists, you can use different handlers to resolve conflicts for different types of data. For example, numeric data is often suitable for a MAXIMUM or MINIMUM update conflict handler, whereas an OVERWRITE or DISCARD update conflict handler may be preferred for character data. You must configure a conditional supplemental log group for all the columns (or source columns) of a column list. However, supplemental logging is not always required: Typically, a conditional supplemental log group is required for the columns in a column list only if it consists of more than one column and if the columns in the column list are affected by multiple columns at the source database. If an apply handler or rule-based transformation combines multiple columns from the source database into a single column in the column list at the destination database, a conditional supplemental log group is required for the columns in the column list even if there is only one column in the column list. If an apply handler or rule-based transformation separates one column from the source database into multiple columns in a column list at the destination database, no conditional supplemental log group is required even if there is more than one column in a column list. Note: Prebuilt update conflict handlers do not support LOB columns. Therefore, you must not include LOB columns in the column_list parameter when running the SET_UPDATE_CONFLICT_HANDLER procedure. Oracle Database 11g: Implement Streams I - 470

471 Oracle Database 11g: Implement Streams I - 471
Planning Column Lists Column list 1 MAXIMUM Table X Table X Shadow list 2 PK Col. A,B,C Col. D,E PK Col. A,B,C Col. D,E Site1 Site2 Place columns in the same list if they need to remain consistent with respect to each other. Planning Column Lists The planning of column lists is important when you design a conflict resolution solution because conflicts are always detected and resolved at the column list level. However, excessive separation of columns into column lists must be avoided. When resolving conflicts, certain columns always need to be consistent with each other. Because update conflict handlers are associated with column lists, separating columns into different column lists when they should be in the same column list can lead to inconsistent data. Example Assume that the following columns are used for the EMPLOYEES table: LAST_NAME (A), (B), JOB (C), MANAGER_ID (D), DEPARTMENT_ID (E), and LAST_UPDATE columns (not shown), none of which are part of the primary key. Assume that a column list has been created for the LAST_NAME, , JOB, and LAST_UPDATE columns, and this column list uses a MAXIMUM conflict handler to resolve conflicts by choosing the transaction with the most recent date. The MANAGER_ID (D) and DEPARTMENT_ID (E) columns are not in a column list (shadow list). Oracle Database 11g: Implement Streams I - 471

472 Oracle Database 11g: Implement Streams I - 472
Planning Column Lists (continued) Example (continued) Assume that when planning your Streams environment, you overlooked the fact that the JOB and DEPARTMENT_ID columns need to be consistent with respect to each other because the job titles can change when employees transfer to new departments. As a result, the table data may diverge unexpectedly. For example, assume that a female employee was recently married, but did not change her legal name. A few months later, this employee is offered a position in a different department. She decides to change her last name at the same time that she accepts the new position. The following situation can occur: 1. A payroll administrator at SITE2 updates the MANAGER_ID (D), DEPARTMENT_ID (E), and JOB (C) columns for the employee in one transaction. 2. The LAST_NAME (A) and EMAIL (B) columns are updated on SITE1 a few seconds later by a human resources administrator in an unrelated transaction. 3. When the row LCR containing the update to the JOB (C), MANAGER_ID (D), and DEPARTMENT_ID (E) columns is applied at SITE1, a conflict handler is triggered because a conflict is detected for the columns in the column list. Using the MAXIMUM resolution method with a time-stamp column as the resolution column, the change performed at SITE1 is kept and the change in the row LCR is discarded. No errors are signaled at SITE1. 4. When the row LCR containing the update to the LAST_NAME (A) and EMAIL (B) columns is applied at SITE2, the conflict handler at SITE2 is triggered because a row with the same primary key and same old value for the JOB column is not found. The registered conflict handler for the column list determines that the change with the latest time stamp, the one performed at SITE1, must be retained. The new LAST_NAME and EMAIL values are applied to the table and the JOB (C) column is reset to its original old value. No error is signaled at SITE2.
However, because the MANAGER_ID and DEPARTMENT_ID columns are not in a column list (or being supplementally logged), the old values for these columns are not checked on SITE2 and thus the data conflict is not detected. The data in the two tables diverge.

Site 1                              Site 2
DILLY, JDILLY, SH_CLERK, 122, 50    DILLY, JDILLY, SH_CLERK, 122, 50
STONE, JSTONE, SH_CLERK, 122, 50    DILLY, JDILLY, SA_REP, 145, 80
STONE, JSTONE, SH_CLERK, 122, 50    STONE, JSTONE, SH_CLERK, 145, 80

Oracle Database 11g: Implement Streams I - 472

473 Specifying Column Lists
When configuring a conflict handler, use the COLUMN_LIST parameter to create a column list. DECLARE cols DBMS_UTILITY.NAME_ARRAY; BEGIN cols(1) := 'salary'; cols(2) := 'commission_pct'; DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER( object_name => 'hr.employees', method_name => 'MAXIMUM', resolution_column => 'salary', column_list => cols); END; Specifying Column Lists You specify a column list when configuring a conflict handler. The list of columns is passed in as a name array, as shown in the example in the slide. The column_list parameter is a list of columns for which the conflict handler is called. If a conflict occurs for one or more of the columns in the list when an apply process tries to apply a row LCR, the specified conflict handler is called to resolve the conflict. The conflict handler is not called if a conflict occurs only for columns that are not in the list. To remove a column list, use the DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER procedure. Specify the same parameters that you specify when you configure the conflict handler, but use NULL as the value for the column list. Note: Prebuilt update conflict handlers do not support LOB, LONG, LONG RAW, and user-defined type columns. Therefore, you must not include these types of columns in the column_list parameter when you run the SET_UPDATE_CONFLICT_HANDLER procedure. Oracle Database 11g: Implement Streams I - 473

474 Configuring Supplemental Logging
Places additional column data in the redo log, which is captured by the capture process Is required at the source database for the following columns at the destination database: Columns at the source database that have unique indexes at the destination database Conflict resolution columns that are used in a column list Columns that are used for a row subsetting condition Columns that are used by DML, error handlers, or rule-based transformations Configuring Supplemental Logging If you plan to use one or more apply processes to apply captured changes, you must enable supplemental logging at the source database for the types of columns listed in the slide. If you do not use supplemental logging for these types of columns at a source database, changes involving these columns may not apply properly at a destination database. There are additional situations that require supplemental logging. Refer to the Oracle Database 10g Streams Replication Administrator’s Guide for a complete list of the supplemental logging requirements for shared tables. To use supplemental logging for all primary key and unique key columns in a source database, issue the following SQL statement: ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS; If your primary key and unique key columns are the same at all source and destination databases, running this command at the source database provides the supplemental logging that is needed for primary and unique key columns at all destination databases. Note: LOB, LONG, LONG RAW, and user-defined type columns cannot be part of a supplemental log group. Oracle Database 11g: Implement Streams I - 474
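After configuring supplemental logging, you can confirm what is actually being logged by querying the data dictionary. This is a verification sketch; the HR owner filter is an example:

```sql
-- Database-level supplemental logging state:
SELECT supplemental_log_data_pk, supplemental_log_data_ui
FROM   v$database;

-- Table-level log groups and their member columns:
SELECT g.table_name, g.log_group_name, g.always, c.column_name
FROM   dba_log_groups g
JOIN   dba_log_group_columns c
ON     g.owner = c.owner
AND    g.log_group_name = c.log_group_name
WHERE  g.owner = 'HR'
ORDER  BY g.table_name, c.position;
```

The ALWAYS column distinguishes unconditional log groups (old values always logged) from conditional ones (old values logged only when a member column changes).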

475 Resolving Conflicts with Prebuilt Update Conflict Handlers
If an apply process detects a conflict, the following events occur: For update conflicts, the apply process calls an appropriate update conflict handler, if defined. The apply process calls the error handler: For uniqueness or delete conflicts If an update conflict occurs on columns not in any column list If the update handler was not successful in resolving the update conflict If the error handler does not exist or is not successful, the apply process raises an error and places the transaction in the error queue. Resolving Conflicts with Prebuilt Update Conflict Handlers Streams provides prebuilt conflict handlers to resolve conflicts after they have been detected. You can also build your own custom conflict handler to resolve data conflicts that are specific to your business rules. When you share data between multiple databases and you want the data to be the same at all sites, make sure that you use conflict handlers that allow data to converge at all databases. Data convergence for a table is possible only if all databases are sharing all the data in the table with all the other databases in the environment. Also, in such an environment, the MAXIMUM conflict resolution method can guarantee convergence only if the values in the resolution column are always increasing. A time-based resolution column meets this requirement. The MINIMUM conflict resolution method can guarantee convergence in such an environment only if the values in the resolution column are always decreasing (such as the number of items left to ship). When using prebuilt update conflict handlers, the specified conflict handler is invoked as soon as a conflict is detected for one of the columns in the column list. If neither the specified conflict handler nor an error handler can resolve the conflict, the conflict is logged in the error queue. Oracle Database 11g: Implement Streams I - 475

476 Resolving Conflicts with Custom Conflict Handlers
Custom conflict handlers can be implemented as DML handlers or error handlers. A custom conflict handler can also invoke a prebuilt update conflict handler. Conflict handlers are executed before error handlers. If an error is not addressed, it goes into the apply error queue. Resolving Conflicts with Custom Conflict Handlers You can create a PL/SQL procedure to use as a custom conflict handler. Use the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package to designate one or more custom conflict handlers for a particular table. You can implement custom conflict handlers as either an error handler or a DML handler. Custom conflict handlers can be used in conjunction with prebuilt update conflict handlers. If you want an error handler to perform conflict resolution when an error is raised, set the error_handler parameter of the SET_DML_HANDLER procedure to TRUE. If you would rather include the conflict resolution handling in your DML handler, set the error_handler parameter to FALSE. If the custom conflict handler or the prebuilt update conflict handler cannot resolve the conflict, the apply process moves the transaction containing the conflict to the error queue and does not apply the transaction. Oracle Database 11g: Implement Streams I - 476
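Registering a custom handler as an error handler follows this pattern. The handler procedure name is a placeholder for a PL/SQL procedure you would write yourself; it is not supplied by the course:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.employees',
    object_type    => 'TABLE',
    operation_name => 'DELETE',
    error_handler  => TRUE,  -- FALSE would register a plain DML handler
    user_procedure => 'strmadmin.resolve_delete_conflict');
END;
/
```

With error_handler set to TRUE, the procedure is invoked only when applying the row LCR raises an error, which is the natural hook for delete and uniqueness conflicts that the prebuilt handlers cannot resolve.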

477 Oracle Database 11g: Implement Streams I - 477
Resolving Conflicts with Custom Conflict Handlers (continued) If both a prebuilt update conflict handler and a custom conflict handler exist for a particular object, the prebuilt update conflict handler is invoked only if both of the following conditions are met: The custom conflict handler executes the row LCR by using the EXECUTE member procedure for the LCR. The conflict_resolution parameter in the EXECUTE member procedure for the row LCR is set to TRUE. At the source database, you must specify an unconditional supplemental log group for the columns referenced by a custom conflict handler. Note: In past releases, an ORA error was returned for delete and update conflicts. In Oracle Database 11g, two new error messages make it easier to handle apply errors in DML handlers and error handlers. They appear on the error stack before the ORA error. An ORA error is raised if the row to be updated or deleted does not exist in the target table. An ORA error is raised when the row exists in the target table, but the values of some columns do not match those of the row LCR. If you have existing DML handlers and error handlers, you may need to modify them. Oracle Database 11g: Implement Streams I - 477

478 Configuring Conflict Resolution
Conflict resolution is configured for the apply process. Alter the table first to contain any needed columns. Configure supplemental logging for those columns. For update conflicts, use the DBMS_APPLY_ADM package: SET_UPDATE_CONFLICT_HANDLER. For delete or uniqueness conflicts, build a custom conflict handler as part of a DML or error handler. Conflict resolution cannot be specified for updates that are applied to a non-Oracle database. Configuring Conflict Resolution You can implement the latest time stamp update conflict handler for the HR.LOCATIONS table as follows: 1. Add a column to the table of the TIMESTAMP WITH TIME ZONE type. 2. Create a trigger to automatically populate the TIMESTAMP column when a row is inserted or updated in the table. 3. Configure supplemental logging for the HR.LOCATIONS table. You must include the primary-key columns and the TIMESTAMP column. 4. Call the DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER procedure to specify the MAXIMUM conflict handler, and specify a column list that includes the TIMESTAMP column. You must configure supplemental logging at the source database for all columns in the column list. When you apply changes to a non-Oracle database, update conflicts are detected, but these conflicts cannot be resolved through an update conflict handler. Oracle Database 11g: Implement Streams I - 478
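The four steps described in the notes for HR.LOCATIONS might look like the following sketch. The column, trigger, and log group names are illustrative choices, not objects defined by the course:

```sql
-- 1. Add the resolution column.
ALTER TABLE hr.locations ADD (last_update TIMESTAMP WITH TIME ZONE);

-- 2. Populate it automatically on insert and update.
CREATE OR REPLACE TRIGGER hr.locations_last_update_trg
BEFORE INSERT OR UPDATE ON hr.locations
FOR EACH ROW
BEGIN
  :NEW.last_update := SYSTIMESTAMP;
END;
/

-- 3. Supplementally log the primary key and the column-list columns
--    (run at the source database).
ALTER TABLE hr.locations
  ADD SUPPLEMENTAL LOG GROUP loc_conflict_cols (location_id, last_update);

-- 4. Register the MAXIMUM handler with LAST_UPDATE as the
--    resolution column (run at the destination database).
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'last_update';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.locations',
    method_name       => 'MAXIMUM',
    resolution_column => 'last_update',
    column_list       => cols);
END;
/
```

Because the trigger always advances LAST_UPDATE, the MAXIMUM handler resolves each conflict in favor of the most recent transaction, which lets the data converge at all sites.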

479 Configuring Conflict Resolution: Example
DECLARE cols DBMS_UTILITY.NAME_ARRAY; BEGIN cols(1) := 'department_name'; cols(2) := 'manager_id'; DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER( object_name => 'hr.departments', method_name => 'DISCARD', resolution_column => 'department_name', column_list => cols); END; / At source database: Configuring Conflict Resolution: Example The example given in the slide implements update conflict resolution for the DEPARTMENTS table in the HR schema. Two of the table columns are referenced in a column list. These columns are also listed in the ALTER TABLE … ADD SUPPLEMENTAL LOG GROUP command, which is executed on the source database. The example assumes that the primary key for the table, the DEPARTMENT_ID column, is already supplementally logged. If a conflict occurs during the apply of a row LCR to the DEPARTMENTS table at this destination site, the DML operation within the LCR is discarded. The DEPARTMENT_NAME column is specified as the resolution column but is not used by the DISCARD conflict resolution handler. ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG GROUP dept_conflict_cols (department_name, manager_id); Oracle Database 11g: Implement Streams I - 479

480 Oracle Database 11g: Implement Streams I - 480
Viewing Apply Errors Are there any errors in the error queue?

SELECT apply_name, source_database,
       local_transaction_id, error_message
FROM   DBA_APPLY_ERROR;

APPLY_NAME  SOURCE_DATABASE  LOCAL_TRANSACTION_ID  ERROR_MESSAGE
----------  ---------------  --------------------  -------------
APPLY       SITE1.NET                              ORA-00001: unique constraint (HR.COUNTRY_C_ID_PK_NOIOT) violated

Viewing Apply Errors If the disable_on_error apply parameter is set to TRUE (default value), the apply process is disabled when it encounters an error. Check for any errors by querying DBA_APPLY_ERROR. If a transaction in the error queue consists of multiple LCRs, the message_number column of DBA_APPLY_ERROR indicates which LCR caused the error. If you need to view the contents of the LCR to determine the cause of the error, you can use the PRINT_TRANSACTION and PRINT_ERRORS procedures to display this information. After the error has been cleared, you can restart the apply process, but you may want to first change the value of the disable_on_error parameter. Here is an example:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER (
    apply_name => 'APPLY_SITE1_LCRS',
    parameter  => 'disable_on_error',
    value      => 'N');
END;

Oracle Database 11g: Implement Streams I - 480
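Once the cause of an error has been corrected, the error transaction can be retried or discarded. The transaction ID below is a placeholder for a value you would read from DBA_APPLY_ERROR:

```sql
-- Retry a single error transaction after fixing the root cause:
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.2.345',
    execute_as_user      => FALSE);
END;
/

-- Or discard the transaction if it should never be applied:
BEGIN
  DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '1.2.345');
END;
/
```

With execute_as_user set to FALSE, the transaction is re-executed in the security context of the original receiver of the message rather than that of the current user.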

481 Viewing Error Transaction Information
The error queue contains SYS.AnyData type messages. To view the message data for non-LCR messages in the error queue, you must extract the user-enqueued message from the SYS.AnyData type. To view the message data for LCRs in the error queue, you must extract the LCR object from the SYS.AnyData type.

[Diagram: a SYS.AnyData wrapper containing either non-LCR message data or an LCR object.]

Viewing Error Transaction Information The following page provides details about how to view and print LCR and non-LCR message data. Oracle Database 11g: Implement Streams I - 481

482 Printing Values from a SYS.AnyData Type
For non-LCR messages: To print the data contained within a SYS.AnyData type object, you must know the embedded data type. The GETTYPE and GETTYPENAME member functions of SYS.AnyData return the type code and type name (respectively) of the embedded data type. After the data type is known, you can use an appropriate GET member function to retrieve the data. Printing Values from a SYS.AnyData Type The possible type code return values are taken from the DBMS_TYPES package. You can use the constants in the DBMS_TYPES package to extract the data types, or you can find the value codes by viewing the $ORACLE_HOME/rdbms/admin/dbmsany.sql file. The common data types are: DATE := 12; NUMBER := 2; RAW := 95; CHAR := 96; VARCHAR2 := 9; BLOB := 113; CLOB := 112; BFILE := 114; TIMESTAMP := 187; TIMESTAMP_TZ := 188; TIMESTAMP_LTZ := 232; OBJECT := 108; NCHAR := 286; NVARCHAR2 := 287; After you know the data type, use the appropriate GET member function to view the value of the data embedded in the SYS.AnyData type object: GETDATE GETNUMBER GETRAW GETCHAR GETVARCHAR2 GETBLOB GETCLOB GETBFILE There are additional functions for data types not shown here. Oracle Database 11g: Implement Streams I - 482

483 Oracle Database 11g: Implement Streams I - 483
Printing Values from a SYS.AnyData Type (continued)

CREATE OR REPLACE PROCEDURE print_any(data IN SYS.AnyData) IS
  tn  VARCHAR2(61);
  str VARCHAR2(4000);
  chr VARCHAR2(1000);
  num NUMBER;
  dat DATE;
  rw  RAW(4000);
  res NUMBER;
BEGIN
  IF data IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('NULL value');
    RETURN;
  END IF;
  tn := data.GETTYPENAME();
  IF tn = 'SYS.VARCHAR2' THEN
    res := data.GETVARCHAR2(str);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(str, 0, 253));
  ELSIF tn = 'SYS.CHAR' THEN
    res := data.GETCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(chr, 0, 253));
  ELSIF tn = 'SYS.VARCHAR' THEN
    res := data.GETVARCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(chr);
  ELSIF tn = 'SYS.NUMBER' THEN
    res := data.GETNUMBER(num);
    DBMS_OUTPUT.PUT_LINE(num);
  ELSIF tn = 'SYS.DATE' THEN
    res := data.GETDATE(dat);
    DBMS_OUTPUT.PUT_LINE(dat);
  ELSIF tn = 'SYS.RAW' THEN
    -- res := data.GETRAW(rw);
    -- DBMS_OUTPUT.PUT_LINE(SUBSTR(DBMS_LOB.SUBSTR(rw), 0, 253));
    DBMS_OUTPUT.PUT_LINE('BLOB Value');
  ELSIF tn = 'SYS.BLOB' THEN
    DBMS_OUTPUT.PUT_LINE('BLOB Found');
  ELSE
    DBMS_OUTPUT.PUT_LINE('typename is ' || tn);
  END IF;
END print_any;
/

Oracle Database 11g: Implement Streams I - 483

484 Oracle Database 11g: Implement Streams I - 484
Printing an LCR An LCR message can be of the LCR$_ROW_RECORD or LCR$_DDL_RECORD type. To print details about an LCR, you must: Determine the type of the LCR Extract the LCR into an object type Use LCR member methods to display the LCR attributes Loop through the nested table of column values for old and new values Use the PRINT_ANY procedure to print each column value Printing an LCR The first step in displaying the contents of an LCR is to determine which type of LCR you are printing by calling the GETTYPENAME function. To print the top-level attributes of the LCR, there are several member functions you can call to retrieve this information, such as GET_SOURCE_DATABASE_NAME. The old and new column values of a row LCR are stored in nested tables of the LCR$_ROW_UNIT object type. To display these values, you must loop through each nested table and print out the value for each column. Because the LCR$_ROW_UNIT type stores the individual column values in a SYS.AnyData type, you must use the PRINT_ANY procedure shown on the previous page to print out the column values. Here is a sample procedure that you can use to print the contents of an LCR. This procedure was copied from the Oracle Streams Concepts and Administration 10g documentation and has been modified to remove the code for printing extra attribute information:
CREATE OR REPLACE PROCEDURE print_lcr(lcr IN SYS.ANYDATA) IS
  typenm  VARCHAR2(61);
  ddllcr  SYS.LCR$_DDL_RECORD;
  proclcr SYS.LCR$_PROCEDURE_RECORD;
  rowlcr  SYS.LCR$_ROW_RECORD;
Oracle Database 11g: Implement Streams I - 484

485 Oracle Database 11g: Implement Streams I - 485
Printing an LCR (continued)
  res      NUMBER;
  newlist  SYS.LCR$_ROW_LIST;
  oldlist  SYS.LCR$_ROW_LIST;
  ddl_text CLOB;
  ext_attr SYS.AnyData;
BEGIN
  typenm := lcr.GETTYPENAME();
  DBMS_OUTPUT.PUT_LINE('type name: ' || typenm);
  IF (typenm = 'SYS.LCR$_DDL_RECORD') THEN
    res := lcr.GETOBJECT(ddllcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' ||
                         ddllcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || ddllcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || ddllcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || ddllcr.IS_NULL_TAG);
    DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
    ddllcr.GET_DDL_TEXT(ddl_text);
    DBMS_OUTPUT.PUT_LINE('ddl: ' || ddl_text);
    -- Print extra attributes in DDL LCR
    <put code here>
    DBMS_LOB.FREETEMPORARY(ddl_text);
  ELSIF (typenm = 'SYS.LCR$_ROW_RECORD') THEN
    res := lcr.GETOBJECT(rowlcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' ||
                         rowlcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || rowlcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || rowlcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || rowlcr.IS_NULL_TAG);
    DBMS_OUTPUT.PUT_LINE('command_type: ' || rowlcr.GET_COMMAND_TYPE);
    oldlist := rowlcr.GET_VALUES('old');
    FOR i IN 1..oldlist.COUNT LOOP
      IF oldlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('old('||i||'): ' || oldlist(i).column_name);
        print_any(oldlist(i).data);
      END IF;
    END LOOP;
    newlist := rowlcr.GET_VALUES('new', 'n');
    FOR i IN 1..newlist.COUNT LOOP
      IF newlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('new('||i||'): ' || newlist(i).column_name);
        print_any(newlist(i).data);
        -- Print extra attributes in row LCR
      END IF;
    END LOOP;
  ELSE
    DBMS_OUTPUT.PUT_LINE('Non-LCR Message with type ' || typenm);
  END IF;
END print_lcr;
/
Oracle Database 11g: Implement Streams I - 485
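The looping that print_lcr performs over the old and new column lists can be sketched as follows. The dictionary shape and names are hypothetical stand-ins for the Oracle LCR types, not the real API:

```python
# Illustrative sketch only: a row LCR is modeled as a dict holding top-level
# attributes plus 'old' and 'new' lists of (column_name, value) pairs,
# mirroring the LCR$_ROW_UNIT nested tables that print_lcr loops over.

def print_row_lcr(lcr):
    """Return the display lines for a row-LCR-like dict."""
    lines = []
    lines.append('source database: ' + lcr['source_database_name'])
    lines.append('owner: ' + lcr['object_owner'])
    lines.append('object: ' + lcr['object_name'])
    lines.append('command_type: ' + lcr['command_type'])
    # Like GET_VALUES('old') and GET_VALUES('new', 'n'): walk both lists.
    for kind in ('old', 'new'):
        for i, (col, val) in enumerate(lcr[kind], start=1):
            lines.append('%s(%d): %s = %s' % (kind, i, col, val))
    return lines

update_lcr = {
    'source_database_name': 'AMER', 'object_owner': 'HR',
    'object_name': 'EMPLOYEES', 'command_type': 'UPDATE',
    'old': [('SALARY', 7000)], 'new': [('SALARY', 8000)],
}
for line in print_row_lcr(update_lcr):
    print(line)
```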

486 Oracle Database 11g: Implement Streams I - 486
Managing Errors After the error condition is resolved, transactions in the error queue can be reexecuted: As the user who originally received the LCR As the current user Any relevant apply handlers are invoked when the transaction is retried. To discard the message, delete the transaction. EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS( - apply_name => 'APPLY_SITE1_LCRS', - execute_as_user => TRUE); Managing Errors To view information about transactions in the error queue, query the DBA_APPLY_ERROR data dictionary view. To view the contents of a message in the error queue, you can use the DBMS_APPLY_ADM.GET_ERROR_MESSAGE function, or you can implement the PRINT_ERRORS or PRINT_TRANSACTION procedures that are documented in Oracle Streams Concepts and Administration. After the condition that caused the error has been resolved, you can either reexecute the transaction in the error queue or delete the transaction from the error queue by using procedures in the DBMS_APPLY_ADM package: Retry a single transaction by calling EXECUTE_ERROR, specifying the local_transaction_id of the transaction to be reexecuted: DBMS_APPLY_ADM.EXECUTE_ERROR(' ') Correct the problem and retry all the transactions for a particular apply process with the following command: DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS('<Apply Name>'); Remove the transaction from the error queue: DBMS_APPLY_ADM.DELETE_ERROR(' ') Oracle Database 11g: Implement Streams I - 486

487 Oracle Database 11g: Implement Streams I - 487
Managing Errors (continued) When executing error transactions, if you specify an alternate user with the execute_as_user parameter, the user who executes the transactions must have privileges to perform DML and DDL changes on the apply objects and to run any apply handlers. This user must also have dequeue privileges on the queue used by the apply process. If you must modify an LCR before it is reexecuted, you can configure a DML handler to process the LCR in the error queue. For example, the DML handler may modify the LCR in some way to avoid a repetition of the same error. When you reexecute an LCR from the error queue with the EXECUTE_ERROR or EXECUTE_ALL_ERRORS procedure, the LCR is passed to any appropriate DML handlers by the apply engine. If you specify the USER_PROCEDURE parameter when executing EXECUTE_ERROR, the error is handled by the user-defined method specified. This is the recommended method to handle errors rather than using DML handlers. Oracle Database 11g: Implement Streams I - 487
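The retry behavior of EXECUTE_ALL_ERRORS can be pictured with a toy error queue. This sketch uses invented names and carries none of the Oracle transaction semantics; it only shows that entries that still fail stay queued while fixed ones are applied:

```python
# Toy error queue: each entry is (txn_id, apply_fn). Re-executing retries
# the stored operation; entries that fail again remain in the queue, which
# is the behavior you rely on when calling EXECUTE_ALL_ERRORS after a fix.

def execute_all_errors(error_queue):
    """Retry every queued transaction; return (applied ids, still-failing)."""
    still_failing = []
    applied = []
    for txn_id, apply_fn in error_queue:
        try:
            apply_fn()
            applied.append(txn_id)
        except Exception:
            still_failing.append((txn_id, apply_fn))
    return applied, still_failing

table = {}
def good(): table['dept'] = 'IT'                          # error was fixed
def bad():  raise ValueError('constraint still violated')  # error remains

applied, remaining = execute_all_errors([('5.12.3041', good),
                                         ('8.2.119', bad)])
print(applied)          # ['5.12.3041']
print(len(remaining))   # 1
```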

488 Using Procedures to Execute LCRs
[Slide diagram: an apply process with an LCR in its error queue; after the error is fixed, an LCR that was last enqueued or that is in the error queue can be run with the EXECUTE member procedure.] Using Procedures to Execute LCRs In previous releases, the EXECUTE member procedure could only be called for a row LCR that was being processed by an apply or error handler. Now the EXECUTE member procedure can be called for a row LCR under the following additional conditions: The LCR is in a queue and was last enqueued by an apply process, an application, or a user. The LCR is in the error queue. You can use this procedure to test a DML or error handler that invokes the EXECUTE member function in a SQL session. You can also use this feature to construct LCRs and execute them in a specific transactional context. Also, by using the user_procedure parameter in the EXECUTE_ERROR procedure of the DBMS_APPLY_ADM package, you can specify a user procedure that modifies one or more LCRs in an error transaction before the transaction is executed. To retry a transaction, you provide the transaction identifier and the name of the procedure to use. For example, suppose you have an error in the error queue for an apply process and the error was caused by a mismatch in the old values for the same row between the source site and the destination site. The transaction that errored out consisted of around 200 individual DML operations. By querying the message_number column of DBA_APPLY_ERROR, you determined that the particular LCR that is causing the error is the 107th message in the transaction. Oracle Database 11g: Implement Streams I - 488

489 Oracle Database 11g: Implement Streams I - 489
Using Procedures to Execute LCRs (continued)
This example shows how to create a procedure that modifies an LCR. An incoming LCR (in this example, the 107th message from an apply error) has its SALARY old value modified and is then reexecuted by the EXECUTE_ERROR call at the bottom of this page:
CREATE OR REPLACE PROCEDURE strmadmin.fix_emp_salary(
  in_any                       IN     SYS.AnyData,
  err_rec                      IN     DBA_APPLY_ERROR%ROWTYPE,
  err_num                      IN     NUMBER,
  messaging_default_processing IN OUT BOOLEAN,
  out_any                      OUT    SYS.AnyData)
AS
  typenm VARCHAR2(61);
  rowlcr SYS.LCR$_ROW_RECORD;
  res    NUMBER;
BEGIN
  -- Set OUT data to initial LCR
  out_any := in_any;
  -- Check to make sure we have the right LCR
  -- within the error transaction
  IF (err_num = 107) THEN
    -- Extract LCR from AnyData type
    res := in_any.GETOBJECT(rowlcr);
    -- Modify old value to match current value in the target table
    rowlcr.SET_VALUE('old', 'SALARY', SYS.AnyData.ConvertNumber(8000));
    -- Specify that the apply process continues to process
    -- the current message
    messaging_default_processing := TRUE;
    -- Set OUT data to modified LCR
    out_any := SYS.AnyData.ConvertObject(rowlcr);
  END IF;
  -- If this procedure was called for any LCR other than
  -- the one needing to be modified, nothing is changed.
END;
/
After the procedure has been created, you can use it to correct an LCR within a transaction in the error queue. First, identify the local transaction ID for the transaction by querying DBA_APPLY_ERROR, and then call the EXECUTE_ERROR procedure, as shown here:
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => ' ',
    execute_as_user      => false,
    user_procedure       => 'strmadmin.fix_emp_salary');
END;
/
Oracle Database 11g: Implement Streams I - 489
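The user-procedure pattern above can be sketched as a pure function applied to each message of a failed transaction. Everything here (the dictionary shape, the message number 107, the salary values) is illustrative, following the example in the text rather than any Oracle API:

```python
# Sketch of the EXECUTE_ERROR user_procedure pattern: a fix-up function is
# offered every message of the failed transaction and alters only the one
# (here the 107th) whose old value caused the mismatch.

def fix_emp_salary(lcr, err_num):
    """Patch the old SALARY value of message 107; pass others through."""
    if err_num == 107:
        patched = dict(lcr)
        old = dict(patched['old'])
        old['SALARY'] = 8000          # match current value in the target table
        patched['old'] = old
        return patched
    return lcr                         # all other messages are unchanged

def execute_error(transaction, user_procedure):
    """Apply the user procedure to each message, numbered from 1."""
    return [user_procedure(lcr, n)
            for n, lcr in enumerate(transaction, start=1)]

# A transaction of 200 DML operations, as in the example in the text.
txn = [{'old': {'SALARY': 7000}} for _ in range(200)]
fixed = execute_error(txn, fix_emp_salary)
print(fixed[106]['old']['SALARY'])  # 8000  (the 107th message, index 106)
print(fixed[0]['old']['SALARY'])    # 7000  (untouched)
```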

490 Viewing Conflict Resolution Information
Prebuilt update conflict handler configuration information: [ALL | DBA]_APPLY_CONFLICT_COLUMNS Custom conflict handler configuration details: [ALL | DBA]_APPLY_DML_HANDLERS Information about error transactions: [ALL | DBA]_APPLY_ERROR Columns for which conflict detection has been suppressed: [ALL | DBA]_APPLY_TABLE_COLUMNS Number of messages that resulted in apply errors: V$STREAMS_APPLY_COORDINATOR Viewing Conflict Resolution Information The [ALL | DBA]_APPLY_CONFLICT_COLUMNS view displays information about prebuilt update conflict handlers configured for table columns. The [ALL | DBA]_APPLY_DML_HANDLERS view lists the DML and error handlers configured for a database table. The associated user procedure may be used to perform custom conflict resolution. The [ALL | DBA]_APPLY_ERROR view displays information about error transactions generated by the apply processes. The [ALL | DBA]_APPLY_TABLE_COLUMNS view displays information about the destination table columns. It also shows whether apply should compare the old value of the table column to the old values in the LCR when performing deletes or updates on the table. The V$STREAMS_APPLY_COORDINATOR view has a column that shows the number of transactions applied by the apply process that resulted in an apply error since the apply process was last started. Oracle Database 11g: Implement Streams I - 490

491 Oracle Database 11g: Implement Streams I - 491
Summary In this lesson, you should have learned how to: Describe how data conflicts can be resolved automatically List the prebuilt conflict handlers that are available with Oracle Streams Specify a resolution column for resolving conflicts Create a column list Create a conflict handler for an apply process Manage conflict handlers in a database Oracle Database 11g: Implement Streams I - 491

492 Practice 16 Overview: Implementing a Conflict Handler
This practice covers the following topics: Source site Altering a table to add a conflict resolution column Configuring supplemental logging for the table to support a conflict handler Destination site Creating a conflict handler Verifying propagation of source site DDL changes Altering apply to implement the conflict handler Querying data dictionary tables for information about conflict handlers Practice 16 Overview: Implementing a Conflict Handler After you configure automatic conflict resolution, you generate another data conflict and verify that conflict resolution was used, resulting in consistent data on both sites. Oracle Database 11g: Implement Streams I - 492

493 Result of Practice 16: Implementing a Conflict Handler
[Slide diagram: bidirectional replication of the HR schema between the AMER and EURO databases. Each site has a capture process (HR_CAP, HR_CAP_2), a propagation (HR_PROPAGATION, HR_PROPAGATION_2), and an apply process (HR_APPLY, HR_APPLY_2); the HR_CAP_Q and HR_APPLY_Q queues act as the containing queue and exception queue. A MAXIMUM update conflict handler is configured on HR.EMPLOYEES.] Oracle Database 11g: Implement Streams I - 493

494 Comparing Data Between Databases

495 Oracle Database 11g: Implement Streams I - 495
Objectives After completing this lesson, you should be able to perform data convergence and comparisons. Oracle Database 11g: Implement Streams I - 495

496 Oracle Database 11g: Implement Streams I - 496
Comparing Table Data The DBMS_COMPARISON package enables you to: Compare database objects in multiple databases and identify differences Compare and converge: Tables Single-table views Materialized views Synonyms for tables, single-table views, and materialized views Converge the database objects so that they are consistent at different databases Comparing Table Data In Oracle Database 11g, a new Oracle-supplied package called DBMS_COMPARISON enables you to compare two tables of similar structure across one or more databases. This feature can be used in Oracle Streams to verify that replicated tables in two databases are in sync. The DBMS_COMPARISON package provides the capability to validate the data in two or more databases and verify that they match. For example, if the database object is a table, one instance of the table may have more rows than another instance of the table, or two instances of the table may have different data in the same rows. If differences are found in the database object, this package can converge the database objects so that they are consistent. The DBMS_COMPARISON package can compare and converge the following types of database objects: Tables Single-table views Materialized views Synonyms for tables, single-table views, and materialized views Oracle Database 11g: Implement Streams I - 496

497 Oracle Database 11g: Implement Streams I - 497
Comparing Table Data (continued) Database objects of different types can be compared and converged in different databases. For example, a table in one database and a materialized view in another database can be compared and converged. The DBMS_COMPARISON package provides flexibility for differences in the shared database object in different databases. The database objects being compared do not need to have the same name. In addition, column names can differ in the database objects, as long as the corresponding columns are of the same data type. Subsets of columns and rows of a table can be compared using views. Oracle Database 11g: Implement Streams I - 497

498 Performing Comparisons
You perform comparisons by using: Scans A scan checks for differences in some or all rows in a shared database object at a single point in time. A unique scan ID identifies each scan in the comparison results. Buckets A bucket is a range of rows in a database object that is being compared. Buckets improve performance by splitting the database object into ranges and comparing the ranges independently. Performing Comparisons You create a comparison between two database objects by using the CREATE_COMPARISON procedure in the DBMS_COMPARISON package. After you create a comparison, you can run the comparison at any time by using the COMPARE function. When you run the COMPARE function, it records comparison results in the appropriate data dictionary views. Separate comparison results are generated for each execution of the COMPARE function. Scans Each time the COMPARE function is run, one or more new scans are performed for the specified comparison. A scan checks for differences in some or all rows in a shared database object at a single point in time. The comparison results for a single execution of the COMPARE function can include one or more scans. You can compare database objects multiple times, and a unique scan ID identifies each scan in the comparison results. Buckets A bucket is a range of rows in a database object that is being compared. Buckets improve performance by splitting the database object into ranges and comparing the ranges independently. Every comparison divides the rows being compared into an appropriate number of buckets. The number of buckets used depends on the size of the database object and is always less than the maximum number of buckets specified for the comparison by the max_num_buckets parameter in the CREATE_COMPARISON procedure. Oracle Database 11g: Implement Streams I - 498

499 Oracle Database 11g: Implement Streams I - 499
Performing Comparisons (continued) When a bucket is compared using the COMPARE function, the following results are possible: No differences are found. In this case, the comparison proceeds to the next bucket. Differences are found. In this case, the comparison can split the bucket into smaller buckets and compare each smaller bucket. When differences are found in a smaller bucket, the bucket is split into still smaller buckets. This process continues until the minimum number of rows allowed in a bucket is reached. The minimum number of rows in a bucket for a comparison is specified by the min_rows_in_bucket parameter in the CREATE_COMPARISON procedure. When the minimum number of rows in a bucket is reached, the COMPARE function reports whether there are differences in the bucket. The COMPARE function includes the perform_row_dif parameter. This parameter controls whether the COMPARE function identifies each row difference in a bucket that has differences. When this parameter is set to TRUE, the COMPARE function identifies each row difference. When this parameter is set to FALSE, the COMPARE function does not identify specific row differences. Instead, it only reports that there are differences in the bucket. You can adjust the max_num_buckets and min_rows_in_bucket parameters in the CREATE_COMPARISON procedure to achieve the best performance when comparing a particular database object. After a comparison is created, you can view the bucket specifications for the comparison by querying the MAX_NUM_BUCKETS and MIN_ROWS_IN_BUCKET columns in the DBA_COMPARISON data dictionary view. The DBMS_COMPARISON package uses the ORA_HASH function on the specified columns in all the rows in a bucket to compute a hash value for the bucket. If the hash values for two corresponding buckets match, the contents of the buckets are assumed to match. The ORA_HASH function is an efficient way to compare buckets because row values are not transferred between databases. 
Instead, only the hash value is transferred. Parent Scans and Root Scans Each time the COMPARE function splits a bucket into smaller buckets, it performs new scans of the smaller buckets. The scan that analyzes a larger bucket is the parent scan of each scan that analyzes the smaller buckets into which the larger bucket was split. The root scan in the comparison results is the highest-level parent scan. The root scan does not have a parent. You can identify parent and root scan IDs by querying the DBA_COMPARISON_SCAN_SUMMARY data dictionary view. You can recheck a scan by using the RECHECK function, and you can converge a scan by using the CONVERGE procedure. When you want to recheck or converge all the rows in the comparison results, specify the root scan ID for the comparison results in the appropriate subprogram. When you want to recheck or converge a portion of the rows in the comparison results, specify the scan ID of the scan that contains the differences. For example, a scan with differences in 20 buckets is the parent scan for 20 additional scans, assuming that each bucket with differences has more rows than the specified minimum number of rows in a bucket for the comparison. To view the minimum number of rows in a bucket for the comparison, query the MIN_ROWS_IN_BUCKET column in the DBA_COMPARISON data dictionary view. Oracle Database 11g: Implement Streams I - 499
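The split-and-recurse bucket algorithm described above can be sketched as follows. This is an illustration of the idea, not the DBMS_COMPARISON internals: hash a bucket on each side, stop if the hashes match, otherwise split the key range and recurse until buckets reach a minimum size, then diff row by row:

```python
import hashlib

def bucket_hash(rows):
    """Hash a bucket of (key, value) rows, standing in for ORA_HASH:
    only this digest, not the row data, would cross the network."""
    h = hashlib.sha256()
    for key, value in rows:
        h.update(repr((key, value)).encode())
    return h.hexdigest()

def compare(local, remote, min_rows_in_bucket=2):
    """local/remote: sorted lists of (key, value). Return differing keys."""
    if bucket_hash(local) == bucket_hash(remote):
        return []                       # whole bucket matches: stop here
    if len(local) <= min_rows_in_bucket or len(remote) <= min_rows_in_bucket:
        l, r = dict(local), dict(remote)    # smallest bucket: diff row by row
        return sorted(k for k in set(l) | set(r) if l.get(k) != r.get(k))
    mid = len(local) // 2
    split_key = local[mid][0]           # split the key range and recurse
    return (compare(local[:mid], [x for x in remote if x[0] < split_key],
                    min_rows_in_bucket) +
            compare(local[mid:], [x for x in remote if x[0] >= split_key],
                    min_rows_in_bucket))

a = [(i, 'v%d' % i) for i in range(8)]
b = [(i, 'v%d' % i) for i in range(8)]
b[5] = (5, 'changed')
print(compare(a, b))  # [5]
```

Matching buckets are pruned on the first hash check, so only the subtrees that actually contain differences are ever scanned in detail, which is the source of the performance benefit the text describes.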

500 Oracle Database 11g: Implement Streams I - 500
Creating a Comparison
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'compare_departments',
    schema_name     => 'hr',
    object_name     => 'departments',
    dblink_name     => 'euro');
END;
/
[Slide diagram: a comparison scan, identified by a scan ID, between Database A and Database B.]
Creating a Comparison You use the CREATE_COMPARISON procedure in the DBMS_COMPARISON package to define a comparison of a shared database object in two different databases. After the comparison is defined, you can use the COMPARE function in this package to compare the database object specified in the comparison at the current point in time. You can run the COMPARE function multiple times for a specific comparison. Each time you run the function, it results in a scan of the database objects, and each scan has its own scan ID. To compare the entire HR.DEPARTMENTS table in the amer and euro databases, you can use the example shown. The name of the new comparison is compare_departments. This comparison is owned by the user who runs the CREATE_COMPARISON procedure. The CREATE_COMPARISON procedure identifies one or more index columns in the shared database object. The DBMS_COMPARISON package must be able to identify at least one column that it can use as an index column. If the specified database object does not have a column that can be used as an index column, the CREATE_COMPARISON procedure cannot create a comparison for the database object. Oracle Database 11g: Implement Streams I - 500

501 Oracle Database 11g: Implement Streams I - 501
Creating a Comparison (continued)
Parameters for the CREATE_COMPARISON Procedure
COMPARISON_NAME: The name of the comparison
SCHEMA_NAME: The name of the schema that contains the local database object to compare
OBJECT_NAME: The name of the local database object to compare
DBLINK_NAME: The database link to the remote database. The specified database object in the remote database is compared with the database object in the local database. If NULL, then the comparison is configured to compare two database objects in the local database. In this case, parameters that specify the remote database object apply to the second database object in the comparison and to operations on the second database object. For example, specify the second database object in this procedure by using the REMOTE_SCHEMA_NAME and REMOTE_OBJECT_NAME parameters.
INDEX_SCHEMA_NAME: The name of the schema that contains the index. If NULL, then the schema specified in the SCHEMA_NAME parameter is used.
INDEX_NAME: The name of the index. If NULL, then the system determines the index columns for the comparison automatically. If the INDEX_SCHEMA_NAME parameter is non-NULL, the INDEX_NAME parameter must also be non-NULL. Otherwise, an error is raised.
REMOTE_SCHEMA_NAME: The name of the schema that contains the database object at the remote database. Specify a non-NULL value if the schema names are different at the two databases. If NULL, the schema specified in the SCHEMA_NAME parameter is used.
REMOTE_OBJECT_NAME: The name of the database object at the remote database. Specify a non-NULL value if the database object names are different at the two databases. If NULL, the database object specified in the OBJECT_NAME parameter is used.
COMPARISON_MODE: The mode of the comparison. The default value, CMP_COMPARE_MODE_OBJECT, compares the specified database objects. Additional modes may be added in future releases.
COLUMN_LIST: Specify '*' to include all the columns in the database objects being compared.
To compare a subset of columns in the database objects, specify a comma-separated list of the columns to check. Any columns that are not in the list are ignored during a comparison and convergence. SCAN_MODE: Either CMP_SCAN_MODE_FULL, CMP_SCAN_MODE_RANDOM, CMP_SCAN_MODE_CYCLIC, or CMP_SCAN_MODE_CUSTOM. If you specify CMP_SCAN_MODE_CUSTOM, make sure you specify an index by using the INDEX_SCHEMA_NAME and INDEX_NAME parameters. Specifying an index ensures that you can specify the correct min_value and max_value for the lead index column when you run the COMPARE or RECHECK function. Oracle Database 11g: Implement Streams I - 501

502 Oracle Database 11g: Implement Streams I - 502
Creating a Comparison (continued) Parameters for the CREATE_COMPARISON Procedure (continued) SCAN_PERCENT: The percentage of the database object to scan for comparison when the SCAN_MODE parameter is set to either CMP_SCAN_MODE_RANDOM or CMP_SCAN_MODE_CYCLIC. For these SCAN_MODE settings, a non-NULL value that is greater than 0 (zero) and less than 100 is required. If NULL and the SCAN_MODE parameter is set to CMP_SCAN_MODE_FULL, the entire database object is scanned for comparison. If NULL and the SCAN_MODE parameter is set to CMP_SCAN_MODE_CUSTOM, the portion of the database object scanned for comparison is specified when the COMPARE function is run. If non-NULL and the SCAN_MODE parameter is set to either CMP_SCAN_MODE_FULL or CMP_SCAN_MODE_CUSTOM, the SCAN_PERCENT parameter is ignored. Note: When the SCAN_PERCENT parameter is non-NULL, and the lead index column for the comparison does not distribute the rows in the database object evenly, the portion of the database object that is compared may be smaller or larger than the specified SCAN_PERCENT value. NULL_VALUE: The value to substitute for each NULL in the database objects being compared. Specify a value or use the CMP_NULL_VALUE_DEF constant. If a column being compared can contain NULLs, the value specified for this parameter must be different from any non-NULL value in the column. Otherwise, if the value specified for this parameter can appear in the column, some row differences may not be found. LOCAL_CONVERGE_TAG: The Oracle Streams tag to set in the session on the local database before performing any changes to converge the data in the database objects being compared. If the LOCAL_CONVERGE_TAG parameter is non-NULL in the CONVERGE procedure when comparison results for this comparison are converged, the setting in the CONVERGE procedure takes precedence. 
REMOTE_CONVERGE_TAG: The Oracle Streams tag to set in the session on the remote database before performing any changes to converge the data in the database objects being compared. If the REMOTE_CONVERGE_TAG parameter is non-NULL in the CONVERGE procedure when comparison results for this comparison are converged, the setting in the CONVERGE procedure takes precedence. MAX_NUM_BUCKETS: The maximum number of buckets to use. Specify a value or use the CMP_MAX_NUM_BUCKETS constant. Note: If an index column for a comparison is a VARCHAR2 or CHAR column, the number of buckets may exceed the value specified for the MAX_NUM_BUCKETS parameter. MIN_ROWS_IN_BUCKET: The minimum number of rows in each bucket. Specify a value or use the CMP_MIN_ROWS_IN_BUCKET constant. Oracle Database 11g: Implement Streams I - 502

503 Performing a Comparison
DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'compare_departments',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
  DBMS_OUTPUT.PUT_LINE('Scan ID: ' || scan_info.scan_id);
  IF consistent = TRUE THEN
    DBMS_OUTPUT.PUT_LINE('No differences were found.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences were found.');
  END IF;
END;
/
Performing a Comparison The next step is to run the COMPARE function to compare the HR.DEPARTMENTS table in the two databases, as shown. This example prints the scan ID for the comparison. You can use the scan ID to query data dictionary views for information about the comparison and when you are converging the database objects. The output will be similar to the following:
Scan ID: 30
Differences were found.
PL/SQL procedure successfully completed.
The function also prints whether or not differences were found in the table in the two databases: If the function prints 'No differences were found.', the table is consistent in the two databases. If the function prints 'Differences were found.', the table has diverged in the two databases. Oracle Database 11g: Implement Streams I - 503

504 Viewing the Differences in Data
SELECT c.OWNER, c.COMPARISON_NAME, c.SCHEMA_NAME,
       c.OBJECT_NAME, s.CURRENT_DIF_COUNT
FROM   DBA_COMPARISON c, DBA_COMPARISON_SCAN s
WHERE  c.COMPARISON_NAME = s.COMPARISON_NAME
AND    s.SCAN_ID = 30;
Viewing the Differences in Data Using the scan ID, query the DBA_COMPARISON_SCAN data dictionary view to show the number of differences found:
COLUMN OWNER FORMAT A16
COLUMN COMPARISON_NAME FORMAT A20
COLUMN SCHEMA_NAME FORMAT A11
COLUMN OBJECT_NAME FORMAT A11
COLUMN CURRENT_DIF_COUNT HEADING 'DIFF' FORMAT
SELECT c.OWNER, c.COMPARISON_NAME, c.SCHEMA_NAME,
       c.OBJECT_NAME, s.CURRENT_DIF_COUNT
FROM   DBA_COMPARISON c, DBA_COMPARISON_SCAN s
WHERE  c.COMPARISON_NAME = s.COMPARISON_NAME
AND    s.SCAN_ID = 30;
Your output should be similar to the following:
OWNER    COMPARISON_NAME       SCHEMA_NAME  OBJECT_NAME  DIFF
-------- --------------------- ------------ ------------ ----
SYSTEM   COMPARE_DEPARTMENTS   HR           DEPARTMENTS     3
Oracle Database 11g: Implement Streams I - 504

505 Identifying the Rows That Differ
SELECT c.COLUMN_NAME, r.INDEX_VALUE,
       DECODE(r.LOCAL_ROWID, NULL, 'No', 'Yes') LOCAL_ROWID,
       DECODE(r.REMOTE_ROWID, NULL, 'No', 'Yes') REMOTE_ROWID
FROM   DBA_COMPARISON_COLUMNS c,
       DBA_COMPARISON_ROW_DIF r,
       DBA_COMPARISON_SCAN s
WHERE  c.COMPARISON_NAME = 'COMPARE_DEPARTMENTS'
AND    r.SCAN_ID = s.SCAN_ID
AND    s.PARENT_SCAN_ID = 30
AND    r.STATUS = 'DIF'
AND    c.INDEX_COLUMN = 'Y'
ORDER BY r.INDEX_VALUE;
Identifying the Rows That Differ To see which rows were different in the database object being compared, run the following query:
COLUMN COLUMN_NAME HEADING 'Index Column' FORMAT A15
COLUMN INDEX_VALUE HEADING 'Index Value' FORMAT A15
COLUMN LOCAL_ROWID HEADING 'Local Row Exists?' FORMAT A20
COLUMN REMOTE_ROWID HEADING 'Remote Row Exists?' FORMAT A20
SELECT c.COLUMN_NAME, r.INDEX_VALUE,
       DECODE(r.LOCAL_ROWID, NULL, 'No', 'Yes') LOCAL_ROWID,
       DECODE(r.REMOTE_ROWID, NULL, 'No', 'Yes') REMOTE_ROWID
FROM   DBA_COMPARISON_COLUMNS c,
       DBA_COMPARISON_ROW_DIF r,
       DBA_COMPARISON_SCAN s
WHERE  c.COMPARISON_NAME = 'COMPARE_DEPARTMENTS'
AND    r.SCAN_ID = s.SCAN_ID
AND    s.PARENT_SCAN_ID = 30
AND    r.STATUS = 'DIF'
AND    c.INDEX_COLUMN = 'Y'
ORDER BY r.INDEX_VALUE;
Oracle Database 11g: Implement Streams I - 505

506 Oracle Database 11g: Implement Streams I - 506
Identifying the Rows That Differ (continued)
In the example, the WHERE clause specifies the name of the comparison and the scan ID for the comparison. In this example, the name of the comparison is compare_departments and the scan ID is 30. The output looks similar to the following:
Index Column    Index Value     Local Row Exists?    Remote Row Exists?
--------------- --------------- -------------------- --------------------
DEPARTMENT_ID                   Yes                  Yes
DEPARTMENT_ID                   Yes                  No
DEPARTMENT_ID                   No                   Yes
This output shows the index column for the table being compared and the index value for each row that is different in the shared database object. In this example, the index column is the primary-key column for the HR.DEPARTMENTS table (department_id). The output also shows the type of difference for each row: If Local Row Exists? and Remote Row Exists? are both Yes for a row, the row exists in both instances of the database object, but the data in the row is different. If Local Row Exists? is Yes and Remote Row Exists? is No for a row, the row exists in the local database object, but not in the remote database object. If Local Row Exists? is No and Remote Row Exists? is Yes for a row, the row exists in the remote database object, but not in the local database object. Oracle Database 11g: Implement Streams I - 506
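The three difference cases in this output can be reproduced with a small sketch. Tables are modeled as dictionaries keyed by the index column; this is an illustration of the classification, not a query against the dictionary views:

```python
# Sketch of the three row-difference cases reported in
# DBA_COMPARISON_ROW_DIF: both rows exist but differ, row exists only
# locally, or row exists only remotely.

def classify_differences(local, remote):
    """Return (key, local_row_exists, remote_row_exists) for each diff."""
    report = []
    for key in sorted(set(local) | set(remote)):
        in_local, in_remote = key in local, key in remote
        if in_local and in_remote:
            if local[key] != remote[key]:
                report.append((key, 'Yes', 'Yes'))   # same row, different data
        elif in_local:
            report.append((key, 'Yes', 'No'))        # local-only row
        else:
            report.append((key, 'No', 'Yes'))        # remote-only row
    return report

local  = {10: 'Administration', 20: 'Marketing', 30: 'Purchasing'}
remote = {10: 'Administration', 20: 'Sales', 40: 'HR'}
for row in classify_differences(local, remote):
    print(row)
# (20, 'Yes', 'Yes')
# (30, 'Yes', 'No')
# (40, 'No', 'Yes')
```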

507 Using the CONVERGE Procedure
PROCEDURE CONVERGE
 Argument Name          Type      In/Out  Default?
 ---------------------- --------- ------- --------
 COMPARISON_NAME        VARCHAR2  IN
 SCAN_ID                NUMBER    IN
 SCAN_INFO              RECORD    OUT
   SCAN_ID              NUMBER    OUT
   LOC_ROWS_MERGED      NUMBER    OUT
   RMT_ROWS_MERGED      NUMBER    OUT
   LOC_ROWS_DELETED     NUMBER    OUT
   RMT_ROWS_DELETED     NUMBER    OUT
 CONVERGE_OPTIONS       VARCHAR2  IN      DEFAULT
 PERFORM_COMMIT         BOOLEAN   IN      DEFAULT
 LOCAL_CONVERGE_TAG     RAW       IN      DEFAULT
 REMOTE_CONVERGE_TAG    RAW       IN      DEFAULT
Using the CONVERGE Procedure If the data in the compared database tables differs, you can use the CONVERGE procedure in the DBMS_COMPARISON package to converge the two instances of the database object. After the CONVERGE procedure runs successfully, the shared database object is consistent in the two databases. The CONVERGE procedure synchronizes the portion of the database object compared by the specified scan and returns information about the changes it made. Some scans may compare a subset of the database object. In this example, the specified scan compared the entire table. So, the entire table is synchronized, assuming that no new differences appeared after the comparison scan completed. The local table wins in this example because the converge_options parameter is set to DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS in the procedure. That is, for the rows that are different in the two databases, the rows in the local database replace the corresponding rows in the remote database. If some rows exist in the remote database, but not at the local database, the extra rows in the remote database are deleted. If you want the remote database to win instead, set the converge_options parameter to DBMS_COMPARISON.CMP_CONVERGE_REMOTE_WINS in the procedure. Oracle Database 11g: Implement Streams I - 507

508 Oracle Database 11g: Implement Streams I - 508
Using the CONVERGE Procedure (continued)
In addition, if you run the CONVERGE procedure on a shared database object that is part of a Streams replication environment, you may not want the changes made by the procedure to be replicated to other databases. In this case, you can set the following parameters in the CONVERGE procedure to values that prevent the changes from being replicated:
local_converge_tag
remote_converge_tag
When one of these parameters is set to a non-NULL value, a Streams tag is set in the session that makes the changes during convergence. The local_converge_tag parameter sets the tag in the session in the local database, whereas the remote_converge_tag parameter sets the tag in the session in the remote database. If you do not want the changes made by the CONVERGE procedure to be replicated, set these parameters to a value that prevents Streams capture processes and synchronous captures from capturing the changes.
Oracle Database 11g: Implement Streams I - 508

509 Converging Database Objects
DECLARE
  scan_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CONVERGE(
    comparison_name  => 'compare_departments',
    scan_id          => 30,
    scan_info        => scan_info,
    converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
  DBMS_OUTPUT.PUT_LINE('Local Rows Merged:   '||scan_info.loc_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Merged:  '||scan_info.rmt_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Local Rows Deleted:  '||scan_info.loc_rows_deleted);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Deleted: '||scan_info.rmt_rows_deleted);
END;
/

Converging Database Objects
To run the CONVERGE procedure, specify the following information:
The name of an existing comparison created using the CREATE_COMPARISON procedure in the DBMS_COMPARISON package
The scan ID of the comparison scan that you want to converge; the scan identifies the differences that will be converged
In this example, the name of the comparison is compare_departments and the scan ID is 30.
Also, when you run the CONVERGE procedure, you must specify which database "wins" when the shared database object is converged. If you specify that the local database wins, the data in the database object in the local database replaces the data in the database object in the remote database when the data is different. If you specify that the remote database wins, the data in the database object in the remote database replaces the data in the database object in the local database when the data is different. In this example, the amer local database wins.
Oracle Database 11g: Implement Streams I - 509

510 Oracle Database 11g: Implement Streams I - 510
Converging Database Objects (continued)
The example shows you how to run the CONVERGE procedure to converge the HR.DEPARTMENTS table in the two databases:

SET SERVEROUTPUT ON
DECLARE
  scan_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CONVERGE(
    comparison_name  => 'compare_departments',
    scan_id          => 30,
    scan_info        => scan_info,
    converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
  DBMS_OUTPUT.PUT_LINE('Local Rows Merged:   '||scan_info.loc_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Merged:  '||scan_info.rmt_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Local Rows Deleted:  '||scan_info.loc_rows_deleted);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Deleted: '||scan_info.rmt_rows_deleted);
END;
/

Local Rows Merged:   0
Remote Rows Merged:  2
Local Rows Deleted:  0
Remote Rows Deleted: 1

PL/SQL procedure successfully completed.
Oracle Database 11g: Implement Streams I - 510

511 Converging a Shared Database Object with a Session Tag Set
DECLARE
  scan_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CONVERGE(
    comparison_name     => 'compare_orders',
    scan_id             => 16,  -- Substitute your scan ID
    scan_info           => scan_info,
    converge_options    => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS,
    remote_converge_tag => '11');
  DBMS_OUTPUT.PUT_LINE('Local Rows Merged: '||scan_info.loc_rows_merged);
END;

Converging a Shared Database Object with a Session Tag Set
If the shared database object being converged is part of a Streams replication environment, you can set a session tag so that changes made by the CONVERGE procedure are not replicated. Typically, changes made by the CONVERGE procedure must not be replicated, to avoid change cycling, which means sending a change back to the database where it originated. In a Streams replication environment, session tags can be used to ensure that changes made by the CONVERGE procedure are not captured by Streams capture processes or synchronous captures and, therefore, are not replicated.
To set a session tag in the session running the CONVERGE procedure, use the following procedure parameters:
The local_converge_tag parameter sets a session tag in the local database. Set this parameter to a value that prevents replication when the remote database "wins" and the CONVERGE procedure makes changes to the local database.
The remote_converge_tag parameter sets a session tag in the remote database. Set this parameter to a value that prevents replication when the local database "wins" and the CONVERGE procedure makes changes to the remote database.
Oracle Database 11g: Implement Streams I - 511

512 Oracle Database 11g: Implement Streams I - 512
Converging a Shared Database Object with a Session Tag Set (continued)
The appropriate value for a session tag depends on the Streams replication environment. Set the tag to a value that prevents capture processes and synchronous captures from capturing changes made by the session. The example specifies that the local database "wins" the converge operation by setting the converge_options parameter to DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS. Therefore, the example sets the remote_converge_tag parameter to 11. The session tag can be set to any non-NULL value that prevents the changes made by the CONVERGE procedure to the remote database from being replicated.

DECLARE
  scan_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CONVERGE(
    comparison_name     => 'compare_orders',
    scan_id             => 16,  -- Substitute the scan ID from your scan.
    scan_info           => scan_info,
    converge_options    => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS,
    remote_converge_tag => '11');
  DBMS_OUTPUT.PUT_LINE('Local Rows Merged:   '||scan_info.loc_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Merged:  '||scan_info.rmt_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Local Rows Deleted:  '||scan_info.loc_rows_deleted);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Deleted: '||scan_info.rmt_rows_deleted);
END;
/

Your output should be similar to the following:

Local Rows Merged:   0
Remote Rows Merged:  5
Local Rows Deleted:  0
Remote Rows Deleted: 1

PL/SQL procedure successfully completed.

You can determine the scan ID as follows:

SELECT ROOT_SCAN_ID
FROM   DBA_COMPARISON_SCAN
WHERE  COMPARISON_NAME = 'COMPARE_ORDERS';
Oracle Database 11g: Implement Streams I - 512

513 Comparing a Subset of Columns
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'compare_subset_columns',
    schema_name     => 'oe',
    object_name     => 'orders',
    dblink_name     => 'euro',
    column_list     => 'order_id,order_date,customer_id');
END;
/

Comparing a Subset of Columns
The column_list parameter in the CREATE_COMPARISON procedure enables you to compare a subset of the columns in a database object. The following are reasons to compare a subset of columns:
A database object contains extra columns that do not exist in the database object to which it is being compared. In this case, the column_list parameter must contain only the columns that exist in both database objects.
You want to focus a comparison on a specific set of columns. For example, if a table contains hundreds of columns, you may want to list specific columns in the column_list parameter to make the comparison more efficient.
Differences are expected in some columns. In this case, exclude the columns in which differences are expected from the column_list parameter.
The columns in the column list must meet the following requirements:
The column list must meet the index column requirements for the DBMS_COMPARISON package.
If you plan to use the CONVERGE procedure to make changes to a database object based on comparison results, you must include in the column list any column in this database object that has a NOT NULL constraint but no default value.
Oracle Database 11g: Implement Streams I - 513

514 Oracle Database 11g: Implement Streams I - 514
Comparing a Subset of Columns (continued)
The example assumes that the OE.ORDERS table has differences in the two databases. Run the COMPARE function to compare the OE.ORDERS table at the two databases:

SET SERVEROUTPUT ON
DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'compare_subset_columns',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
  DBMS_OUTPUT.PUT_LINE('Scan ID: '||scan_info.scan_id);
  IF consistent = TRUE THEN
    DBMS_OUTPUT.PUT_LINE('No differences were found.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences were found.');
  END IF;
END;
/

Notice that the perform_row_dif parameter is set to TRUE in the COMPARE function. This setting instructs the COMPARE function to identify each individual row difference in the tables. When the perform_row_dif parameter is set to FALSE, the COMPARE function records whether or not there are differences in the tables, but does not record each individual row difference.
Your output should be similar to the following:

Scan ID: 1
Differences were found.

PL/SQL procedure successfully completed.
Oracle Database 11g: Implement Streams I - 514

515 Comparing Without Identifying Rows
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'compare_orders',
    schema_name     => 'oe',
    object_name     => 'orders',
    dblink_name     => 'euro');
END;
/

DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'compare_orders',
    scan_info       => scan_info,
    perform_row_dif => FALSE);
  DBMS_OUTPUT.PUT_LINE('Scan ID: '||scan_info.scan_id);
  IF consistent = TRUE THEN
    DBMS_OUTPUT.PUT_LINE('No Differences.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences exist.');
  END IF;
END;

Comparing Without Identifying Rows
This example compares the entire OE.ORDERS table at the local and remote (euro) databases without identifying individual row differences.
When you run the COMPARE function for an existing comparison, the perform_row_dif parameter controls whether the COMPARE function identifies each individual row difference in the database objects:
When the perform_row_dif parameter is set to TRUE, the COMPARE function records whether or not there are differences in the database objects, and it records each individual row difference. Set this parameter to TRUE when you must identify each difference in the database objects.
When the perform_row_dif parameter is set to FALSE, the COMPARE function records whether or not there are differences in the database objects, but does not record each individual row difference. Set this parameter to FALSE when you want to know whether there are differences in the database objects, but you do not need to identify each individual difference. Setting this parameter to FALSE is the most efficient way to perform a comparison.
Your output should be similar to the following:

Scan ID: 4
Differences exist.
Oracle Database 11g: Implement Streams I - 515

516 Comparing a Random Portion
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'compare_random',
    schema_name     => 'oe',
    object_name     => 'orders',
    dblink_name     => 'euro',
    scan_mode       => DBMS_COMPARISON.CMP_SCAN_MODE_RANDOM,
    scan_percent    => 50);
END;
/

Comparing a Random Portion
The scan_percent and scan_mode parameters in the CREATE_COMPARISON procedure enable you to compare a random portion of a shared database object instead of the entire database object. Typically, you use this option under the following conditions:
You are comparing a relatively large shared database object, and you want to determine whether there may be differences without devoting the resources and time to comparing the entire database object.
You do not intend to use subsequent comparisons to compare different portions of the database object.
In the example in the slide, you are randomly comparing the OE.ORDERS table in both databases.
Oracle Database 11g: Implement Streams I - 516

517 Oracle Database 11g: Implement Streams I - 517
Comparing a Random Portion (continued)
Run the COMPARE function to compare the OE.ORDERS table at the two databases:

SET SERVEROUTPUT ON
DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'compare_random',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
  DBMS_OUTPUT.PUT_LINE('Scan ID: '||scan_info.scan_id);
  IF consistent = TRUE THEN
    DBMS_OUTPUT.PUT_LINE('No differences were found.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences were found.');
  END IF;
END;
/

Notice that the perform_row_dif parameter is set to TRUE in the COMPARE function. This setting instructs the COMPARE function to identify each individual row difference in the tables. When the perform_row_dif parameter is set to FALSE, the COMPARE function records whether or not there are differences in the tables, but does not record each individual row difference.
Your output should be similar to the following:

Scan ID: 7
Differences were found.

PL/SQL procedure successfully completed.

This comparison scan may or may not find differences, depending on the portion of the table that is compared.
Oracle Database 11g: Implement Streams I - 517

518 Rechecking the Results for a Comparison
DECLARE
  consistent BOOLEAN;
BEGIN
  consistent := DBMS_COMPARISON.RECHECK(
    comparison_name => 'compare_orders',
    scan_id         => 4);
  IF consistent = TRUE THEN
    DBMS_OUTPUT.PUT_LINE('No differences');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences found');
  END IF;
END;
/

Rechecking the Results for a Comparison
You can recheck a previous comparison scan by using the RECHECK function in the DBMS_COMPARISON package. The RECHECK function checks the current data in the database objects for differences that were recorded in the specified comparison scan. For example, to recheck the results for scan ID 4 of a comparison named compare_orders, log in to SQL*Plus as the owner of the comparison, and run the function shown.
Your output should be similar to the following:

Differences found

PL/SQL procedure successfully completed.

The function returns TRUE if no differences were found (the database objects are consistent) and FALSE if differences were found.
Oracle Database 11g: Implement Streams I - 518

519 Viewing Comparison Results
The following views display information about the results of comparisons:
DBA_COMPARISON
DBA_COMPARISON_COLUMNS
DBA_COMPARISON_SCAN
DBA_COMPARISON_SCAN_SUMMARY
DBA_COMPARISON_SCAN_VALUES
DBA_COMPARISON_ROW_DIF

Viewing Comparison Results
The listed data dictionary views contain information about comparisons created with the DBMS_COMPARISON package. The DBA_COMPARISON or USER_COMPARISON data dictionary view contains information about the comparisons in the local database. The query in this section displays the following information about each comparison:
The owner of the comparison
The name of the comparison
The schema that contains the database object compared by the comparison
The name of the database object compared by the comparison
The type of the database object compared by the comparison
The scan mode used by the comparison. The following scan modes are possible:
- FULL indicates that the entire database object is compared.
- RANDOM indicates that a random portion of the database object is compared.
- CYCLIC indicates that a portion of the database object is compared during a single comparison. When the database object is compared again, another portion of the database object is compared, starting where the last comparison ended.
- CUSTOM indicates that the COMPARE function specifies the range to compare in the database object.
The name of the database link used to connect with the remote database
Oracle Database 11g: Implement Streams I - 519

520 Oracle Database 11g: Implement Streams I - 520
Viewing Comparison Results (continued)
To view this information, run the following query:

COLUMN OWNER HEADING 'Comparison|Owner' FORMAT A10
COLUMN COMPARISON_NAME HEADING 'Comparison|Name' FORMAT A22
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A8
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A8
COLUMN OBJECT_TYPE HEADING 'Object|Type' FORMAT A8
COLUMN SCAN_MODE HEADING 'Scan|Mode' FORMAT A6
COLUMN DBLINK_NAME HEADING 'Database|Link' FORMAT A10

SELECT OWNER, COMPARISON_NAME, SCHEMA_NAME, OBJECT_NAME,
       OBJECT_TYPE, SCAN_MODE, DBLINK_NAME
FROM   DBA_COMPARISON;

Your output should be similar to the following:

Comparison Comparison             Schema   Object   Object   Scan   Database
Owner      Name                   Name     Name     Type     Mode   Link
---------- ---------------------- -------- -------- -------- ------ ----------
ADMIN      COMPARE_SUBSET_COLUMNS OE       ORDERS   TABLE    FULL   euro
ADMIN      COMPARE_ORDERS         OE       ORDERS   TABLE    FULL   euro
ADMIN      COMPARE_RANDOM         OE       ORDERS   TABLE    RANDOM euro
ADMIN      COMPARE_CYCLIC         OE       ORDERS   TABLE    CYCLIC euro
ADMIN      COMPARE_CUSTOM         OE       ORDERS   TABLE    CUSTOM euro
Oracle Database 11g: Implement Streams I - 520

521 Purging Comparison Results
BEGIN
  DBMS_COMPARISON.PURGE_COMPARISON(
    comparison_name => 'compare_orders',
    scan_id         => NULL,
    purge_time      => NULL);
END;
/

Purging Comparison Results
You can purge the comparison results of one or more comparisons when they are no longer needed by using the PURGE_COMPARISON procedure in the DBMS_COMPARISON package. You can purge either all the comparison results for a comparison or a subset of the comparison results. When comparison results are purged, they can no longer be used to recheck the comparison or to converge divergent data. Also, information about the purged results is removed from the data dictionary views.
To purge all the comparison results for a comparison, specify the comparison name in the comparison_name parameter, and specify the default value of NULL for the scan_id and purge_time parameters. For example, to purge all the comparison results for a comparison named compare_orders, log in to SQL*Plus as the owner of the comparison, and run the procedure shown in the slide.
Oracle Database 11g: Implement Streams I - 521

522 Oracle Database 11g: Implement Streams I - 522
Purging Comparison Results (continued)
To purge the comparison results for a specific scan of a comparison, specify the comparison name in the comparison_name parameter, and specify the scan ID in the scan_id parameter. The specified scan ID must identify a root scan. The root scan in comparison results is the highest-level parent scan; the root scan does not have a parent. You can identify root scan IDs by querying the ROOT_SCAN_ID column of the DBA_COMPARISON_SCAN_SUMMARY data dictionary view.
When you run the PURGE_COMPARISON procedure and specify a root scan, the root scan is purged. In addition, all direct and indirect child scans of the specified root scan are purged. Results for other scans are not purged.
For example, to purge the comparison results for scan ID 4 of a comparison named compare_orders, log in to SQL*Plus as the owner of the comparison, and run the following procedure:

BEGIN
  DBMS_COMPARISON.PURGE_COMPARISON(
    comparison_name => 'compare_orders',
    scan_id         => 4);  -- Substitute the scan ID from your scan.
END;
/
Oracle Database 11g: Implement Streams I - 522

523 Oracle Database 11g: Implement Streams I - 523
Dropping a Comparison

BEGIN
  DBMS_COMPARISON.DROP_COMPARISON('compare_subset_columns');
END;
/

To drop a comparison and all its comparison results, use the DROP_COMPARISON procedure in the DBMS_COMPARISON package. For example, to drop a comparison named compare_subset_columns, log in to SQL*Plus as the owner of the comparison, and run the procedure given in the slide.
Oracle Database 11g: Implement Streams I - 523

524 Oracle Database 11g: Implement Streams I - 524
Summary In this lesson, you should have learned how to perform data convergence and comparisons. Oracle Database 11g: Implement Streams I - 524

525 Practice 17 Overview: Comparing Data
This practice covers the following topics: Creating a comparison Performing comparisons Viewing the differences Identifying the rows that differ Converging the database objects Oracle Database 11g: Implement Streams I - 525

526 Practice 17 Result: Comparing Data
[Diagram: the OE.ORDERS table in the EURO and AMER databases, with a comparison scan ID and INSERT, UPDATE, and DELETE changes]
Note: This practice is independent of any Streams configuration.
Oracle Database 11g: Implement Streams I - 526

527 Extending the Streams Environment

528 Oracle Database 11g: Implement Streams 18 - 528
Objectives
After completing this lesson, you should be able to extend an existing Streams environment by:
Adding a new database object to both single-source and multiple-source configurations
Adding a new destination site or source site to both single-source and multiple-source configurations
Creating a new Streams site by using RMAN
Oracle Database 11g: Implement Streams

529 Extending Streams: Adding New Shared Objects
Source: Existing schema or global DDL rules; no exclusions; table creation and supplemental logging
Destination: Automatic creation and instantiation
To do: Check supplemental logging and apply privileges
[Diagram: CP01 at the SITE1 source capturing the HR.SALES table, with global DDL propagation to AP01 at the SITE2 destination]

Extending Streams: Adding New Shared Objects
You can add database objects to a functioning single-source environment by adding the necessary rules to the rule sets of the appropriate capture processes, propagation jobs, and apply processes. Before modifying rules in a running Streams environment, make sure that any propagation jobs or apply processes will receive the new messages.
For example, suppose that you want to add a table to a Streams environment that already captures, propagates, and applies changes to other tables. If you create a new table in a schema on a capture site that has positive schema or global DDL capture rules defined, then, as long as the change is not filtered out by any negative rules, this table is created and instantiated automatically at all destination sites that have instantiated the schema or database. Any DML operations performed on the new table are captured and applied at the destination sites, provided that the DML changes conform to the capture and apply DML rules. You may need to configure supplemental logging for the table at the source site and grant privileges on the table to the apply user at the destination site.
If you do not have positive schema or global DDL capture rules defined and you want to add a table to a functioning Streams environment, you must perform a series of steps to add the database object to the data stream.
Oracle Database 11g: Implement Streams
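To decide which path applies, you can check whether the capture site already has positive schema-level or global DDL rules. The following query is a sketch against the DBA_STREAMS_RULES data dictionary view; run it as a user with access to the DBA views:

```sql
-- Sketch: list positive schema-level or global DDL capture rules.
-- If this query returns rows for the schema in question, a newly created
-- table in that schema is picked up and instantiated automatically.
SELECT STREAMS_NAME,
       STREAMS_TYPE,
       STREAMS_RULE_TYPE,   -- TABLE, SCHEMA, or GLOBAL
       SCHEMA_NAME,
       RULE_TYPE            -- DML or DDL
FROM   DBA_STREAMS_RULES
WHERE  STREAMS_TYPE      = 'CAPTURE'
AND    RULE_SET_TYPE     = 'POSITIVE'
AND    STREAMS_RULE_TYPE IN ('SCHEMA', 'GLOBAL')
AND    RULE_TYPE         = 'DDL';
```

If no such rules exist, follow the manual steps described on the next slides to add the table to the data stream.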

530 Adding a Table to a Single-Source System
1. Stop the data stream.
2. Add rules to apply.
3. Add rules to propagation.
4. Configure supplemental logging.
5. Add rules to capture.
6. Prepare for instantiation.
7. Instantiate at each destination.
8. Grant privileges on table to apply user.
9. Restart stopped processes.

Adding a Table to a Single-Source System: Example
To avoid losing messages, you must complete the configuration in the sequence that is demonstrated in the slide. In this example, you have two databases: the SITE1 source and the SITE2 destination. A Streams environment is already configured and running. Capture and propagation are configured on SITE1, and apply is configured on SITE2. There is no capture or propagation configured on SITE2. You want to add the HR.REGIONS table into the stream with SITE1 as the source database and have changes for this table propagated to and applied at SITE2. The following steps implement this change:
1. Stop the capture process that captures changes to the added shared object, disable the propagation to the target database, or stop the apply processes that apply the changes. Note that only one of the processes must be stopped: capture, propagation, or apply.
2. Add the relevant rules to the rule sets for the apply processes that apply changes to the newly added objects on the destination database. Configure any necessary apply handlers.
3. Add the relevant rules to the rule sets for the propagation jobs on the source database for the newly added objects.
4. Configure any supplemental logging for the HR.REGIONS table at the SITE1 database, if required.
Oracle Database 11g: Implement Streams

531 Oracle Database 11g: Implement Streams 18 - 531
Adding a Table to a Single-Source System: Example (continued)
5. Add capture rules on the source site.
6. When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the appropriate PREPARE_*_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package for the specified table, specified schema, or entire database. If you use the DBMS_RULE_ADM package to add the rules, you must run the appropriate procedure to prepare for instantiation manually. Also, if you modify the rule set for either apply or propagation without similarly modifying the capture process rule set, you must run the appropriate procedure to prepare for instantiation manually.
7. At the destination database, instantiate (or set the instantiation SCN for) each database object that you are adding to the Streams environment. If the table does not exist at a destination database, you can either create the object first or create the object as part of the instantiation process.
8. Grant privileges on the new shared table to the apply user on each destination site. If a user-created apply handler is being used, grant privileges on the procedure to the apply user as well.
9. When all the previous steps are completed, start each process that you stopped and enable each propagation job that you disabled in step 1.
Oracle Database 11g: Implement Streams
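The sequence above can be sketched in PL/SQL. This is an illustrative outline, not the course's lab code: the queue name strmadmin.streams_queue, the process names apply_site1 and capture_site1, the propagation name prop_site1_to_site2, and the site1.net and site2.net names are assumptions that must match your environment.

```sql
-- 1. Stop one of the processes (here, the apply process at SITE2).
BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'apply_site1');
END;
/

-- 2. At SITE2: add an apply rule for the new table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.regions',
    streams_type    => 'apply',
    streams_name    => 'apply_site1',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'site1.net');
END;
/

-- 3. At SITE1: add a propagation rule for the new table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.regions',
    streams_name           => 'prop_site1_to_site2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@site2.net',
    include_dml            => TRUE,
    include_ddl            => TRUE,
    source_database        => 'site1.net');
END;
/

-- 4. At SITE1: configure supplemental logging, if required.
ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- 5/6. At SITE1: add the capture rule. ADD_TABLE_RULES also prepares
--      the table for instantiation automatically (step 6).
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.regions',
    streams_type => 'capture',
    streams_name => 'capture_site1',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => TRUE);
END;
/

-- 9. After instantiating at SITE2 and granting privileges to the
--    apply user (steps 7 and 8), restart the stopped process.
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_site1');
END;
/
```

Each block runs at the site indicated in its comment; the ordering matters, because the downstream components must be ready before capture rules start generating messages for the new table.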

532 Adding a New Destination to the Streams Setup
[Diagram: CP01 at the SITE1 source, with propagation jobs to AP01 at the SITE2 destination and to AP01 at the new SITE3 destination]

Adding a New Destination to the Streams Setup
You can add a destination database to an existing single-source environment by creating one or more new apply processes at the new destination database and, if necessary, configuring one or more propagation jobs to propagate changes to the new destination database. You may also need to add rules to existing propagation jobs in the stream that propagates to the new destination database.
Before creating or altering propagation rules in a running Streams environment, make sure that any propagation jobs or apply processes that receive messages as a result of the new or altered rules are configured to handle these messages. Otherwise, messages may be lost.
Oracle Database 11g: Implement Streams

533 Adding a New Destination to a Single-Source System
1. Configure the new destination database.
2. Create a Streams administrator and queue.
3. Create and configure apply at the destination (but do not start it).
4. Configure propagation from the source site to the destination site.
5. At the source, prepare all shared objects for instantiation.
6. Instantiate all objects at the new destination site.
7. Grant privileges on shared objects to apply user.
8. Start new apply processes.

Adding a New Destination to a Single-Source System
Adding a new destination database to an existing single-source Streams environment follows the same steps that you perform when configuring a database for a new Streams environment. Suppose that the new destination site will receive changes from shared objects that are already part of the stream; capture for these objects is already configured.
1. Complete the necessary tasks to prepare the new database for Streams:
- Set the database parameters that are relevant to Streams.
- Relocate the LogMiner tables to a new tablespace, if needed.
- Configure network connectivity and database links as needed.
2. Configure a Streams administrator and Streams queue at the new destination site. Grant all the necessary privileges to the Streams administrator.
3. Create one or more apply processes at the new destination site to apply the changes from the source site. Specify the rules for each apply process. Do not start any apply process at the new destination. Keeping the apply processes stopped prevents changes that are made at the source databases from being applied before the instantiation of the new database is completed, and thus avoids unsynchronized data and apply errors.
Oracle Database 11g: Implement Streams

534 Oracle Database 11g: Implement Streams 18 - 534
Adding a New Destination to a Single-Source System (continued)
4. Configure propagation from the source site to the new destination site. Make sure that each propagation uses a rule set that is appropriate for propagating changes. Create new propagations as needed.
5. At the source database, prepare for instantiation each database object for which changes will be applied by an apply process at the new destination database. Because you are not creating new capture rules by using procedures in the DBMS_STREAMS_ADM package, the PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION, or PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package must be run at the source site for the shared tables, schemas, or database to which changes are applied at the new destination site.
6. At the new destination database, instantiate (or set the instantiation SCNs for) each database object for which changes will be applied by an apply process. If the database objects do not exist at the new destination database, you can either create the objects first and then instantiate them, or create the database objects as part of the instantiation process.
7. Make sure that the apply user has privileges on the shared objects that have been instantiated.
8. Start the apply processes at the new destination database.
Oracle Database 11g: Implement Streams
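Steps 5 and 6 can be sketched as follows. This is an illustrative outline: the HR.REGIONS table and the site1.net source database name are assumptions, and the explicit SCN capture shown here is only needed when the instantiation method (for example, Data Pump export/import) does not record the instantiation SCN for you.

```sql
-- Step 5: at the source database, prepare each shared table
-- for instantiation.
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.regions');
END;
/

-- Capture the SCN at which the destination copy of the table is current.
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_OUTPUT.PUT_LINE('Instantiation SCN: ' || iscn);
END;
/

-- Step 6: at the new destination database, set the instantiation SCN
-- so that the apply process ignores changes committed before that SCN.
BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.regions',
    source_database_name => 'site1.net',
    instantiation_scn    => 1234567);  -- Substitute the SCN captured above.
END;
/
```

Setting the instantiation SCN is what ties the exported copy of the data to a point in the redo stream: only changes committed after that SCN are applied at the new destination.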

535 Adding Objects to a Multiple-Source System
[Diagram: the HR.SALES table shared among SITE1 (CP01, AP02, AP03), SITE2 (CP02, AP01, AP03), and SITE3 (CP03, AP01, AP02), connected by Propagation1, Propagation2, and Propagation3]

Adding Objects to a Multiple-Source System
Before modifying rules in a running Streams environment, make sure that any propagation jobs or apply processes can receive new messages. To avoid the loss of messages, propagation jobs or apply processes must exist, and each one must be associated with a rule set that handles all messages appropriately.
You can add existing database objects to an existing multiple-source environment by adding the necessary rules to the appropriate capture processes, propagation jobs, and apply processes. For example, suppose that you want to add a new table to a running Streams environment. Assume that multiple capture processes in the environment will capture changes to the table you want to add, and that apply processes for each source database already exist at each destination site. In this case, you must add one or more table-level rules to the positive rule sets for:
Each capture process that will capture changes to the table
Each propagation that will propagate changes to the table
Each apply process that will apply changes to the table
Oracle Database 11g: Implement Streams

536 Adding a Table to a Multiple-Source System
1. Configure supplemental logging at all populated databases.
2. Stop the data stream.
3. Add rules to apply for each destination site.
4. Add rules to propagation at each staging site.
5. Add rules to capture for each source site.
6. Prepare for instantiation.
7. Instantiate at each destination.
8. Grant privileges on the table to apply user.
9. Restart stopped processes.

Adding a Table to a Multiple-Source System
To avoid losing messages, you must complete the configuration in the sequence that is demonstrated in the slide. For example, suppose that you want to add the HR.REGIONS table into a stream with two source databases, both of which have DDL and DML changes for this table propagated to and applied at the other site. The HR.REGIONS table already exists at both sites. The following steps implement this change:
1. At each populated database, specify any necessary supplemental logging for the objects that are being added to the environment.
2. Either stop all the capture processes that will capture changes to the added objects, disable all the propagation jobs that will propagate changes to the added objects, or stop all the apply processes that will apply changes to the added objects.
3. Add the relevant rules to the rule sets for the apply processes that will apply changes to the added objects at each destination site.
4. Add the relevant rules to the rule sets for the propagation jobs that will propagate changes for the added objects at each source and intermediate site.
5. Add the relevant rules to the rule set that is used by each capture process that will capture changes to the added objects at each source site.
Oracle Database 11g: Implement Streams

Adding a Table to a Multiple-Source System (continued)
6. When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION, or PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package for the specified table, the specified schema, or the entire database, respectively. If you use the DBMS_RULE_ADM package to add the rules, you must run the appropriate procedure to prepare for instantiation manually. Also, if you modify the rule set for either apply or propagation without similarly modifying the capture process rule set, you must run the appropriate procedure to prepare for instantiation manually.
7. Because the table already exists at each site, you can use the procedure for instantiating a populated database: At each destination database, instantiate (or set the instantiation SCN for) each database object that you are adding to the Streams environment.
8. Grant privileges on the shared objects to the apply user at each destination site.
9. When all the previous steps have been completed, start each process that you stopped and enable each propagation job that you disabled in step 2.
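Steps 3 through 5 can be sketched with the DBMS_STREAMS_ADM package. This is a hedged sketch only: the process names (AP01, PROPAGATION1, CP01), the queue names, and the SITE1.NET and SITE2.NET global names are hypothetical placeholders, not values from a specific lab environment.

```sql
-- Run at each destination site first: add apply rules (step 3).
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.regions',
    streams_type    => 'apply',
    streams_name    => 'ap01',                       -- hypothetical apply process
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'site1.net');
END;
/
-- Run at each source or intermediate site: add propagation rules (step 4).
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.regions',
    streams_name           => 'propagation1',        -- hypothetical propagation
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@site2.net',
    include_dml            => TRUE,
    include_ddl            => TRUE,
    source_database        => 'site1.net');
END;
/
-- Run at each source site last: add capture rules (step 5).
-- This call also prepares HR.REGIONS for instantiation automatically.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.regions',
    streams_type => 'capture',
    streams_name => 'cp01',                          -- hypothetical capture process
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => TRUE);
END;
/
```

Adding the rules downstream first (apply, then propagation, then capture) ensures that no captured message arrives at a component that has no rule to handle it.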

Adding a New Database to a Multiple-Source Streams Environment

[Figure: The new database SITE3 (capture CP03, apply AP02) joins SITE1 (CP01) and SITE2 (CP02, AP01, AP03), connected by Propagation1 through Propagation4.]

Adding a New Database to a Multiple-Source Streams Environment
To explain the process of creating and altering the configuration of a multiple-source system, the following terms are used:
- Populated database: A database that already contains the database objects shared between sites in a multiple-source environment. You must have at least one populated database to add a new database to the Streams environment.
- Export database: A populated database on which you perform an export of the shared database objects. You can use this export to instantiate the shared database objects at the import databases. You may not have an export database if all the databases in the environment are populated databases.
- Import database: A database being added to the multiple-source environment that does not contain the shared database objects. You instantiate the shared database objects at an import database by using the export dump file from the export database. You may not have any import databases if all the databases in the environment are populated databases.

Adding a New Database to a Multiple-Source System
1. Configure the new source database.
2. Create a Streams administrator and queue.
If the new database is a destination site:
3. Create and configure apply for each source site.
4. Configure propagation from the source sites to the new database.

Adding a New Database to a Multiple-Source System
Adding a new capture database to an existing single-source Streams environment means creating a new multiple-source environment. Follow the same steps as when configuring a capture database for a new Streams environment.
1. Perform the necessary tasks to prepare the new database for Streams: Configure database parameters and archive logging, configure network connectivity, and create database links.
2. Configure a Streams administrator and queue at each new destination site.
If the new database is to be a destination site:
3. Create one or more apply processes at the new database to apply the changes from its source databases. Make sure that each apply process uses an appropriate rule set. Do not start any apply process at the new database. Keeping the apply processes stopped prevents changes made at the source databases from being applied before the instantiation of the new database is completed, which would otherwise lead to incorrect data and errors.
4. Configure propagation jobs at the databases that will be source databases of the new database to send changes to the new database. Make sure that each propagation job uses a rule set that is appropriate for propagating changes.

Adding a New Database to a Multiple-Source System
If the new database is a source site:
5. Create and configure apply at all destinations of the new database.
6. Configure propagation to all destinations.
If shared objects already exist at the new source:
7. Configure supplemental logging for shared objects at the new database.

Adding a New Database to a Multiple-Source System (continued)
If the new database will act as a source site, perform the following steps:
5. At each database that will be a destination site for the changes captured at the new database, create one or more apply processes to apply changes from the new database. Make sure that each apply process uses rule sets that are appropriate for applying changes. Do not start any of these new apply processes.
6. Configure propagation jobs at the new database to send changes from the new source site to each of its destination databases. Make sure that each propagation job uses rule sets that are appropriate for propagating changes.
7. If the shared objects already exist at the new source database, specify any necessary supplemental logging for the shared objects at the new database.

Adding a New Database to a Multiple-Source System
8. Prepare for instantiation at all source sites of the new destination database.
If the new database is a source site:
9. Create the capture process at the new database.
10. Start the capture process at the new database.
If objects already exist at the new site:
11. Do final configuration of a populated database.
Otherwise:
12. Do final configuration of an import database.

Adding a New Database to a Multiple-Source System (continued)
8. At each source database for the new database, prepare for instantiation each database object for which changes will be applied by an apply process at the new database. Use one of the PREPARE_*_INSTANTIATION procedures in the DBMS_CAPTURE_ADM package to prepare tables, schemas, or the entire database.
If the new database will be a source database:
9. Create one or more capture processes to capture the relevant changes. If you use DBMS_STREAMS_ADM to configure capture, this also prepares the objects for instantiation. If you use DBMS_CAPTURE_ADM, you must prepare the shared objects for instantiation after you finish creating the capture process.
10. Start any capture processes that you created in step 9.
After completing these steps, perform the following tasks by using the appropriate method:
11. If the objects that are to be shared already exist at the new database, proceed to the "Final Configuration for a Populated Database" topic in this lesson.
12. If the objects that are to be shared do not already exist at the new database, proceed to the "Final Configuration for an Import Database" topic in this lesson.
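Step 8 (and the manual preparation mentioned in step 9) uses the PREPARE_*_INSTANTIATION procedures. A minimal sketch, run at the source database, assuming the hypothetical HR.REGIONS table and HR schema as the shared objects:

```sql
BEGIN
  -- Prepare a single table for instantiation
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.regions');

  -- Or prepare an entire schema ...
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'hr');

  -- ... or the entire database
  DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION;
END;
/
```

In practice you run only the variant that matches the level at which your capture rules are defined.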

Final Configuration for a Populated Database
a. Set the instantiation SCN for the tables in the new database for each source site in the original configuration.
If the new database is a source site:
b. Instantiate tables at all destination sites of the new database.
c. Configure conflict resolution.
d. Start all apply processes at the new database.
e. Start the new apply processes at the destination sites of the new database.

Final Configuration for a Populated Database
Complete the following steps if the objects that are to be shared with the new database already exist at the new database:
a. If the new database is a destination site, for each source database of the new database, set the instantiation SCNs at the new database. These instantiation SCNs must be set so that only the changes committed at a source database after the corresponding SCN are applied at the new destination database. You can export and import to instantiate the tables, or you can set the instantiation SCN manually.
b. If the new database is a source database, set the instantiation SCNs at each destination database of the new database. These instantiation SCNs must be set so that only the changes made at the new source database that are committed after the corresponding SCN are applied at a destination database.
c. If data conflicts are possible, configure conflict resolution at each site where an apply process was created.
d. If any apply processes were created at the new database, start them now. Make sure that the apply user for each apply process has the proper privileges.
e. If the new database is also a source site, start the apply processes at all destination databases of the new source database.
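Setting an instantiation SCN manually (steps a and b) can be sketched with the DBMS_APPLY_ADM package. The SITE1.NET database link and the HR.REGIONS table are hypothetical placeholders; the pattern of reading the current SCN across the link uses the standard DBMS_FLASHBACK function:

```sql
-- Run at the destination database; SITE1.NET is a link to the source.
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SITE1.NET;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.regions',
    source_database_name => 'site1.net',
    instantiation_scn    => iscn);
END;
/
```

After this call, the apply process at the destination ignores any change to HR.REGIONS from SITE1.NET that committed at or before the recorded SCN.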

Final Configuration for an Import Database
a. Set GLOBAL and SCHEMA instantiation SCNs at all destinations of the new source site.
b. Choose a populated database to be the export database.
c. Export shared objects from the export database and import into the new source site.
d. Configure supplemental logging for shared tables at the new source.
e. Set instantiation SCNs for tables at the new database for all source databases except the export database.
f. Configure conflict resolution.
g. Start all apply processes at the new database.
h. Start new apply processes at the destination sites of the new database.

Final Configuration for an Import Database
a. If the new database is a source database and positive DDL capture rules are configured at the schema or global level, you must set the instantiation SCNs for the new database at all destination databases in the environment; the destination databases can be populated or import databases. Run the SET_GLOBAL_INSTANTIATION_SCN or SET_SCHEMA_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package at each destination site, as needed. Because you are running these procedures before any tables are instantiated at the import databases, and because the local capture processes are already configured for these import databases, you do not need to set the instantiation SCN for each table that is created during the instantiation.
b. Choose one source database from which to instantiate the shared objects at the new database by using either Data Pump or the Export and Import utilities.
c. Export the shared objects at the export database. Next, perform the import at the new database. If you use the original Export and Import utilities, set the OBJECT_CONSISTENT export parameter and the STREAMS_INSTANTIATION import parameter to Y. The import configures the instantiation SCNs for the shared objects at the new database that have the export database as their source database. Grant privileges on the imported objects to the apply user, as appropriate.

Final Configuration for an Import Database (continued)
d. If the new database is a source database, configure supplemental logging for the newly created shared tables. When configuring supplemental logging, set the apply tag for the session to a non-NULL value. This ensures that the supplemental logging commands are not applied at the other databases. After completing this step, set the apply tag back to NULL. Make sure that no changes are made to the relevant objects until after you have specified supplemental logging.
e. For each source database of the new database (except for the source database that performed the export for instantiation in step c), set the instantiation SCNs for the shared objects at the new database. These instantiation SCNs must be set so that only the changes made at a source database that are committed after the corresponding SCN for that database are applied at the new database. You can set the instantiation SCNs by using the Data Pump or Import utilities, or you can set them manually with the DBMS_APPLY_ADM package. The import into the new database created the shared objects. Because capture is already configured on the new database, the instantiation information for these shared objects was captured and propagated to all destination sites of the new database, thus setting the instantiation SCNs for all destination sites of the new source database.
f. If data conflicts are possible, configure conflict resolution at each site where an apply process was created.
g. If any apply processes were created at the new database, start them now. Make sure that the apply user for each apply process has the proper privileges.
h. If the new database is also a new source site, start the apply processes at all destination databases of the new source database.
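Step d can be sketched as follows. The non-NULL tag value and the HR.REGIONS table are arbitrary placeholders; the point is that the redo generated by the supplemental-logging DDL carries a tag that the default apply rules at the other sites ignore:

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_TAG(tag => HEXTORAW('1'));  -- any non-NULL tag
END;
/
ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
BEGIN
  DBMS_STREAMS_ADM.SET_TAG(tag => NULL);           -- restore the default
END;
/
```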

Using Streams API for Rolling Database Upgrades or Migrations
- Not your regular Streams environment
- Not for N-way replication
1. DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP
2. Instantiation
3. DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP
4. DBMS_STREAMS_ADM.CLEANUP_INSTANTIATION_SETUP

Using Streams API for Rolling Upgrades or Migrations
You can use the Oracle Streams environment for database upgrades or migrations. With the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures, you configure an Oracle Streams environment that replicates changes at the database level or to specified tablespaces between two databases. These procedures must be used together, and instantiation actions must be performed manually, to complete the Oracle Streams configuration.
These procedures differ from MAINTAIN_GLOBAL in that the MAINTAIN_GLOBAL procedure automatically excludes database objects that are not supported by Oracle Streams from the replication environment. The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures do not automatically exclude database objects. Instead, these procedures enable you to specify which database objects to exclude from the replication environment. If unsupported database objects are not excluded, capture errors will result.
Typically, the Oracle Streams replication environment configured using these procedures serves one of the following purposes:
- Replicates changes to shared database objects to keep the database objects in sync at different databases

Using Streams API for Rolling Upgrades or Migrations (continued)
- Replicates changes to database objects during a database maintenance operation, such as a database upgrade. In this case, use the CLEANUP_INSTANTIATION_SETUP procedure to remove the replication environment after the maintenance operation is complete.
The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures do not perform an instantiation. You must perform any required instantiation actions manually after running PRE_INSTANTIATION_SETUP and before running POST_INSTANTIATION_SETUP.

Preinstantiation Steps for Rolling Upgrade

At the capture database:

DECLARE
  tablsp DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  tablsp(0) := NULL;
  DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
    maintain_mode        => 'GLOBAL',
    tablespace_names     => tablsp,
    source_database      => 'SITE1.NET',
    destination_database => 'SITE2.NET',
    perform_actions      => TRUE,
    capture_queue_name   => 'SOURCE_queue',
    apply_queue_name     => 'DEST_queue',
    include_ddl          => TRUE,
    start_processes      => TRUE,
    exclude_schemas      => 'SH,TEST',
    exclude_flags        => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_FULL);
END;
/

Preinstantiation Steps
The PRE_INSTANTIATION_SETUP procedure performs the actions required before instantiation to configure a Streams replication environment. Run this procedure at the capture database, the capture database being the database that captures changes made to the source database. You can use the PRE_INSTANTIATION_SETUP procedure to configure the source and target databases to prepare for replicating changes at the database level or to specified tablespaces between two databases. The procedure arguments are very similar to the MAINTAIN_* procedure arguments.
The example shown in the slide indicates the following:
- The entire database must be maintained by configuring replication between the local database (source_database) and the database specified in the destination_database parameter. You are not replicating individual tablespaces, but rather the entire database.
- The global name of the destination database is SITE2.NET. A database link from the source database to the destination database with the same name as the global name of the destination database must exist and must be accessible to the user who runs this procedure.

Preinstantiation Steps (continued)
- Specify whether the procedure should perform the necessary actions to configure the replication environment directly or place the commands in a script.
- The name of the queue used by capture is SOURCE_queue. The name of the queue used by apply is DEST_queue.
- One-way replication from the source database to the database specified in destination_database must be configured.
- An Oracle Streams replication environment that maintains both DML and DDL changes must be configured.
- Each capture process and apply process created by this procedure is started at the end.
- Schema rules for the SH and TEST schemas must be added to the negative rule set of each capture process to exclude these schemas. The negative schema rules must be configured to exclude changes to the schemas and all the database objects in the schemas.
This example uses default values for the following arguments: script_name (NULL), script_directory_object (NULL), capture_name (NULL), capture_queue_table (NULL), capture_queue_user (NULL), propagation_name (NULL), apply_name (NULL), apply_queue_table (NULL), apply_queue_user (NULL), and bi_directional (FALSE).
After running this procedure, you must instantiate the replicated objects at the destination database before continuing with the POST_INSTANTIATION_SETUP procedure. You can use RMAN for the instantiation process.

EXCLUDE_FLAGS Parameter
Can be set to:
- DBMS_STREAMS_ADM.EXCLUDE_FLAGS_FULL
- DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED

EXCLUDE_FLAGS Parameter
The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures contain the EXCLUDE_FLAGS parameter. You can specify one of the following values for this parameter:
- DBMS_STREAMS_ADM.EXCLUDE_FLAGS_FULL to exclude changes to the schemas and all the database objects in the schemas
- DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED to exclude changes to the database objects in the schemas that are not supported by Streams
If both these values are specified, the procedure raises an error.
In addition to DBMS_STREAMS_ADM.EXCLUDE_FLAGS_FULL or DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED, specify one or both of the following values:
- DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML to exclude data manipulation language (DML) changes made to the excluded database objects
- DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL to exclude data definition language (DDL) changes made to the excluded database objects

EXCLUDE_FLAGS Parameter (continued)
Use the plus sign (+) to specify more than one of these values. For example, to maintain DML changes to the tables in the schemas specified by the exclude_schemas parameter but exclude DDL changes to these schemas and the database objects in these schemas, specify the following for this parameter:
DBMS_STREAMS_ADM.EXCLUDE_FLAGS_FULL + DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL
To exclude DML and DDL changes made to unsupported database objects in the schemas specified by the exclude_schemas parameter, specify the following for this parameter:
DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL
Rules for the excluded database objects are added to the negative rule set of each capture process. Therefore, changes to the excluded database objects will not be captured and replicated.
This parameter is valid only if the maintain_mode parameter is set to GLOBAL and the exclude_schemas parameter is set to a non-NULL value. If the maintain_mode parameter is set to GLOBAL and the exclude_schemas parameter is set to NULL, the procedure ignores this parameter. If the maintain_mode parameter is set to TRANSPORTABLE TABLESPACES, the procedure ignores this parameter and automatically excludes from the Streams configuration any database objects in the specified tablespace set that are not supported by Streams. Also, if schemas are specified in the exclude_schemas parameter but the exclude_flags parameter is set to NULL, the procedure does not add any rules to the negative rule set of any capture process, and the procedure includes the schemas specified in the exclude_schemas parameter in the replication environment.

Postinstantiation Steps at Capture Database

DECLARE
  tablsp DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  tablsp(0) := NULL;
  DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
    maintain_mode           => 'GLOBAL',
    tablespace_names        => tablsp,
    source_database         => 'SITE1.NET',
    destination_database    => 'SITE2.NET',
    perform_actions         => FALSE,
    script_name             => 'post_setup.sql',
    script_directory_object => 'SCRIPT_DIR',
    capture_queue_name      => 'SOURCE_queue',
    apply_queue_name        => 'DEST_queue',
    include_ddl             => TRUE,
    start_processes         => TRUE,
    exclude_schemas         => 'SH,TEST',
    exclude_flags           => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_FULL);
END;
/

Parameters must match those of the preinstantiation procedure.

Postinstantiation Steps
The POST_INSTANTIATION_SETUP procedure performs the actions required after instantiation to configure a Streams replication environment. Run this procedure at the capture database. You can use the POST_INSTANTIATION_SETUP procedure to finish configuring the source and target databases for replicating changes at the database level or to specified tablespaces between two databases.
When the POST_INSTANTIATION_SETUP procedure is run, the parameter values must match the parameter values specified when the corresponding PRE_INSTANTIATION_SETUP procedure was run, except for the values of the following parameters: perform_actions, script_name, script_directory_object, and start_processes.
In this example, the parameter values used in the call to POST_INSTANTIATION_SETUP are the same as the parameter values used in the PRE_INSTANTIATION_SETUP call, but the commands are written to the post_setup.sql script instead of being executed directly against the databases involved.

Removing Preinstantiation and Postinstantiation: Cleanup Procedure

DECLARE
  tablsp DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  tablsp(0) := NULL;
  DBMS_STREAMS_ADM.CLEANUP_INSTANTIATION_SETUP(
    maintain_mode           => 'GLOBAL',
    tablespace_names        => tablsp,
    source_database         => 'SITE1.NET',
    destination_database    => 'SITE2.NET',
    perform_actions         => FALSE,
    script_name             => 'cleanup.sql',
    script_directory_object => 'SCRIPT_DIR',
    capture_queue_name      => 'SOURCE_queue',
    apply_queue_name        => 'DEST_queue',
    bi_directional          => FALSE,
    change_global_name      => TRUE);
END;
/

Cleanup Procedure
You can use the CLEANUP_INSTANTIATION_SETUP procedure to remove an Oracle Streams replication configuration that was set up by the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures. This procedure must be run at the database used as the source database when calling the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures.
When the CLEANUP_INSTANTIATION_SETUP procedure is run, the parameter values must match the parameter values specified when the corresponding PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures were run, except for the values of the following parameters: perform_actions, script_name, and script_directory_object.
The CLEANUP_INSTANTIATION_SETUP procedure does not use the following parameters: include_ddl, start_processes, exclude_schemas, and exclude_flags.

Cleanup Procedure (continued)
The CLEANUP_INSTANTIATION_SETUP procedure contains an additional parameter, change_global_name, which, when set to TRUE, changes the global name of the database specified in destination_database to match the global name of the current database. This allows you to easily migrate to the new (destination) database following a database upgrade or patching operation, or to switch to the new database after successfully migrating or upgrading your applications.
The CLEANUP_INSTANTIATION_SETUP procedure is typically the final step of such a database maintenance operation. For example, if you are migrating your database to a new operating system and want to avoid downtime, you use the PRE_INSTANTIATION_SETUP procedure to set up Streams replication, an instantiation method to copy over the data, and the POST_INSTANTIATION_SETUP procedure to synchronize the data between the source and destination databases. Then, when finished, you use the CLEANUP_INSTANTIATION_SETUP procedure to remove the Streams components from both databases.

Creating a New Streams Site by Using RMAN
Copying and instantiating a new database:
- With backups: RMAN DUPLICATE
- Without backups: RMAN DUPLICATE FROM ACTIVE DATABASE
Removing the copied Streams configuration at the destination
Creating a new Streams configuration at the destination

[Figure: RMAN duplicates the SITE1 source database to the SITE2 destination.]

Creating a New Streams Site by Using RMAN
When instantiating an entire database for Oracle Streams, RMAN can copy database structures and metadata much faster than the Export and Import utilities. You can use RMAN to instantiate a new apply site without stopping the Streams activity on the source database. You use the RMAN DUPLICATE command to create a duplicate database from a backup of a source database (either a backup set or an image copy). Even easier, you can use the DUPLICATE ... FROM ACTIVE DATABASE command to create a duplicate database over the network without the need for preexisting database backups. To use the duplicate database within Streams, confirm that the global name for each database is unique.
When you use RMAN to instantiate a database, you perform the following general steps:
1. Copy the entire source database to the destination site by using RMAN.
2. Remove the Streams configuration at the destination site by using the DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION procedure.
3. Configure the Streams destination site, including the creation of at least one apply process, to apply changes made at the source database during the instantiation process.
Note: It is recommended that you do not use RMAN for instantiation in an environment where distributed transactions are possible. Doing so may cause in-doubt transactions that must be corrected manually.

Creating a New Streams Site by Using RMAN (continued)
Creating a New Streams Site by Using the RMAN DUPLICATE ... FROM ACTIVE DATABASE Command
Oracle Database 11g greatly simplifies this process. You can instruct the source database to perform online image copies and archived log copies directly to the auxiliary instance by using Enterprise Manager or the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command. Preexisting backups are no longer needed. The database files are copied from the source to a destination or AUXILIARY instance via an interinstance network connection. RMAN then uses a "memory script" (one that is contained only in memory) to complete recovery and open the database.
Usage Notes for Active Database Duplication
- Oracle Net must be aware of the source and destination databases. The FROM ACTIVE DATABASE clause implies network action.
- If the source database is open, it must have archive logging enabled. If the source database is in the mounted state (and not a standby), it must have been shut down cleanly.
- Availability of the source database is not affected by active database duplication, but the source database instance provides CPU cycles and network bandwidth.
- Password files are copied to the destination. The destination must have the same SYS user password as the source. That is, at the beginning of the active database duplication process, both databases (source and destination) must have password files. When you use the command line and do not duplicate for a standby database, you must use the PASSWORD clause (with the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command).
- Prior to Oracle Database 11g, the SPFILE was not copied because it requires alterations appropriate for the destination environment. You had to copy the SPFILE into the new location, edit it, and specify it when starting the instance in NOMOUNT mode or on the RMAN command line to be used before opening the newly copied database.
Customizing the Destination Database
With Oracle Database 11g, you provide your list of parameters and desired values, and the system sets them. The most obvious parameters are those whose value contains a directory specification. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced. Note the case-sensitivity of parameters: The case must match for PARAMETER_VALUE_CONVERT. With the FILE_NAME_CONVERT parameters, pattern matching is operating system specific. This functionality is equivalent to pausing the database duplication after restoring the SPFILE and issuing ALTER SYSTEM SET commands to modify the parameter file (before the instance is mounted). The directory structure must be in place with the proper permissions.
Enterprise Manager Interface
In Enterprise Manager, select Data Movement > Clone Database.
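An active database duplication can be sketched as follows. The connect strings, the SITE2 database name, and the directory paths in the PARAMETER_VALUE_CONVERT clause are hypothetical placeholders; check the RMAN DUPLICATE documentation for the exact options available in your release:

```
$ rman TARGET sys@site1 AUXILIARY sys@site2

RMAN> DUPLICATE TARGET DATABASE TO site2
        FROM ACTIVE DATABASE
        SPFILE
          PARAMETER_VALUE_CONVERT '/u01/site1/', '/u01/site2/'
        NOFILENAMECHECK;
```

The SPFILE ... PARAMETER_VALUE_CONVERT clause implements the parameter customization described above: matching directory strings in the copied SPFILE are rewritten before the auxiliary instance is mounted.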

Creating a New Streams Site by Using the RMAN DUPLICATE Command
1. Create an RMAN backup of the source site.
2. Create a staging queue at the source site.
3. Create a database link from the source to the destination.
4. Define global propagation from the source queue to the destination queue.
5. Disable the propagation.

Creating a New Streams Site by Using the RMAN DUPLICATE Command
The Oracle Streams Replication Administrator's Guide contains a detailed explanation of each step to perform when using RMAN to instantiate a database. The following steps are discussed only at a high level in this course:
1. Create a backup of the source database.
2. While connected in SQL*Plus as the Streams administrator at the source database, create a SYS.AnyData queue to stage the changes from the source database if such a queue does not already exist. This queue will stage changes that are propagated to the destination database after it has been configured.
3. Create a database link from the source site to the destination site. The database parameter GLOBAL_NAMES may need to be altered on the source site.
4. Create a propagation from the source queue to the destination queue at the destination database by using ADD_GLOBAL_PROPAGATION_RULES. The destination queue does not exist yet, but creating this propagation ensures that messages enqueued into the source queue will remain staged there until propagation is possible. In addition to captured LCRs, the source queue will stage internal messages that will populate the Streams data dictionary at the destination database.
5. Disable the newly created global propagation.

Creating a New Streams Site by Using the RMAN DUPLICATE Command
If a capture process with global rules already exists at the source:
7. Prepare the entire database for instantiation.
Otherwise:
6a. Create a capture process using global rules at the source.
6b. Start the new capture process.
Then:
8. Get the current SCN at the source database.
9. Archive the current online redo log at the source.
10. Prepare the OS environment for database duplication.
11. Use the RMAN DUPLICATE command.

Creating a New Streams Site by Using the RMAN DUPLICATE Command (continued)
6a. If there is no capture process that captures all the changes to the source database, create this capture process with the ADD_GLOBAL_RULES procedure in the DBMS_STREAMS_ADM package. Running this procedure automatically prepares the entire source database for instantiation.
6b. Start the newly created capture process.
7. If such a capture process already exists, make sure that the source database has been prepared for instantiation by querying the DBA_CAPTURE_PREPARED_DATABASE data dictionary view.
8. Determine the value for the UNTIL SCN clause used by the RMAN DUPLICATE command, as in the following example:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;

Creating a New Streams Site by Using the RMAN DUPLICATE Command (continued) 9. Connect to the source database as a system administrator in SQL*Plus and archive the current online redo log: ALTER SYSTEM ARCHIVE LOG CURRENT; 10. Prepare your environment for database duplication, which includes preparing the destination database as an auxiliary instance for duplication. Note: For instructions, see the Oracle Database Backup and Recovery Advanced User's Guide. Perform the following tasks: a. Create an Oracle password file for the auxiliary instance. b. Configure Oracle Net connectivity to the auxiliary instance. c. Create an initialization parameter file for the auxiliary instance. d. Set ORACLE_SID to the SID of the auxiliary instance, and then use SQL*Plus to start the auxiliary instance in MOUNT mode. e. Verify that the source database is started (either mounted or open). f. Start RMAN with a connection to the target database, the auxiliary instance, and (if applicable) the recovery catalog database. If you do not have automatic channels configured, manually allocate at least one auxiliary channel before issuing the DUPLICATE command within the same RUN block. The channel type (DISK or sbt) must match the media where the backups of the target database are located. If the backups reside on disk, the more channels you allocate, the less time it will take to complete the duplication. For tape backups, limit the number of channels to the number of devices available. 11. Use the RMAN DUPLICATE command with the OPEN RESTRICTED option to instantiate the source database at the destination database. The OPEN RESTRICTED option is required. This option enables a restricted session in the duplicate database immediately before the duplicate database is opened. Make sure that you use TO <database_name> in the DUPLICATE command to specify the name of the duplicate database. You can use the UNTIL SCN clause to specify an SCN for the duplication. Use the SCN determined in step 8. 
The until SCN specified for the RMAN DUPLICATE command must be greater than the SCN at which the database was prepared for instantiation in step 6a or 7. Also, archived redo logs must be available for the until SCN specified and for greater SCN values, which is why the redo log containing the until SCN was archived in step 9.
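Steps 9 through 11 might look like the following; the connect strings, channel name, duplicate database name, and SCN value are all hypothetical:

```
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;       -- step 9, at the source

$ rman TARGET sys@src AUXILIARY sys@dest     -- step 11, after the auxiliary
                                             -- instance is started in MOUNT mode
RMAN> RUN {
  ALLOCATE AUXILIARY CHANNEL c1 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO dest
    UNTIL SCN 46931285                       # hypothetical SCN from step 8
    OPEN RESTRICTED;
}
```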

Creating a New Streams Site by Using the RMAN DUPLICATE Command
12. Alter the global name of the duplicate database.
13. Remove the Streams configuration from the duplicate database.
14. End the restricted session at the duplicate database.
15. Create the destination queue at the duplicate database.
16. Configure an apply process on the duplicate database.
17. Set the global instantiation SCN at the duplicate database.
18. Start the apply process at the duplicate database.
19. Enable the global propagation at the source database.
Creating a New Streams Site by Using the RMAN DUPLICATE Command (continued)
12. At the destination database, connect as an administrative user in SQL*Plus and rename the database global name. After the RMAN DUPLICATE command, the destination database has the same global name as the source database.
13. Make sure you are connected to the destination database, not the source database, when you run the DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION procedure because it removes the local Streams configuration. Note: Any supplemental log groups for the tables at the source database are retained at the destination database, and the REMOVE_STREAMS_CONFIGURATION procedure does not drop them. You may drop these supplemental log groups if the duplicate database will not contain a capture process.
14. At the destination database, use the ALTER SYSTEM statement to disable RESTRICTED SESSION, which was enabled by RMAN.
15. At the destination database, use the SET_UP_QUEUE procedure to create the destination queue that was specified as the destination of the global propagation in step 4.

Creating a New Streams Site by Using the RMAN DUPLICATE Command (continued) 16. At the destination database, connect as the Streams administrator and configure the Streams environment. If you create any apply processes, do not start them at this time. 17. At the destination database, use the SET_GLOBAL_INSTANTIATION_SCN procedure of DBMS_APPLY_ADM to set the global instantiation SCN for the source database. The RMAN DUPLICATE command duplicates the database up to one less than the SCN value specified in the UNTIL SCN clause. Therefore, you must subtract one from the until SCN value that you specified when you ran the DUPLICATE command in step 11. Make sure that the recursive parameter is set to TRUE to set the instantiation SCN for all schemas and tables in the destination database. 18. At the destination database, start any apply processes that you configured. 19. At the source database, enable the propagation that you disabled in step 5. Oracle Database 11g: Implement Streams
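The finishing steps can be sketched as follows, run at the destination database unless noted; names and SCN values are hypothetical placeholders carried over from the earlier sketches:

```sql
-- Step 12: rename the duplicate database's global name
-- (it still matches the source after DUPLICATE).
ALTER DATABASE RENAME GLOBAL_NAME TO dest.net;

-- Step 13: remove the Streams configuration copied from the source.
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;

-- Step 14: end the restricted session enabled by RMAN.
ALTER SYSTEM DISABLE RESTRICTED SESSION;

-- Step 15: create the destination queue named in the propagation from step 4.
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
END;
/

-- Step 17: set the global instantiation SCN. RMAN duplicates up to one less
-- than UNTIL SCN, so subtract one from the value used in step 11.
BEGIN
  DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN(
    source_database_name => 'src.net',
    instantiation_scn    => 46931284,   -- UNTIL SCN - 1
    recursive            => TRUE);
END;
/

-- Step 19 (at the SOURCE database): re-enable the propagation from step 5.
EXEC DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'src_to_dest');
```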

Summary In this lesson, you should have learned how to: Add a new database object to a single-source Streams environment Add a new destination site to a single-source Streams configuration Add a new database object to a multiple-source Streams environment Add a destination or source site to a multiple-source Streams configuration Instantiate an entire database by using RMAN and Oracle Streams Oracle Database 11g: Implement Streams

Practice 18 Overview: Configuring and Extending Your Schema Replication
This practice covers the configuration and extending of the HR schema replication by adding a new shared table.
[Diagram: DDL replication of the HR schema between the AMER and EURO databases, using capture processes HR_CAP and HR_CAP_2 with queue HR_CAP_Q, propagations HR_PROPAGATION and HR_PROPAGATION_2, and apply processes HR_APPLY and HR_APPLY_2 with queue HR_APPLY_Q.]

Best Practices and Operational Considerations

Objectives
After completing this lesson, you should be able to:
Implement best practices for configuring a Streams environment
Identify key operational tasks for maintaining a Streams environment
Identify and follow best practices for captured messages
Use triggers in a Streams environment
Identify and follow best practices for rules
Describe backup and recovery issues for a Streams environment and follow best practices for the use of RMAN
Describe interoperability issues

Best Practices for Streams Database Configuration
Use separate staging queues for: Each capture process Each apply process Configure a Streams pool of 200 MB (or higher). Determine checkpoint retention time (for automatic purging). Protect your data. Use the DBMS_STREAMS_AUTH package to manage Streams administrator privileges. Best Practices for Streams Database Configuration Create Multiple Staging Queues Each Streams queue used by a capture process or apply process should contain captured LCRs from at most one capture process. If apply processes exist from multiple source databases, you must configure separate queues for each apply process. Each Streams queue must be created in a separate queue table. A single database capture and apply configuration, where changes are captured to a specific queue and immediately applied from that queue, is the only exception to this recommendation. Configure a Streams pool of 200 MB (or higher) and Memory Management For details, see the lesson titled Overview of Oracle Streams. Determine Checkpoint Retention You can use the ALTER_CAPTURE procedure to set checkpoint_retention_time to the number of days that a capture process must retain checkpoints before purging them automatically. Oracle Database 11g: Implement Streams

Best Practices for Streams Database Configuration (continued)
Protect Your Data
The same techniques that are used to make a single database resilient to failures also apply to distributed hub databases. Oracle Corporation recommends Oracle Real Application Clusters to provide protection from instance and node failures. This configuration must be combined with a “no-loss” physical standby database to protect from disasters and data errors. Oracle Corporation does not recommend using a Streams replica as the only means to protect from disasters.
Manage Streams Administrator Privileges
You can grant privileges to a local Streams administrator by running the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package. In addition to granting all the necessary privileges to the user (with the exception of the DBA role), this procedure adds the user to the DBA_STREAMS_ADMINISTRATOR view when the privileges are granted directly to the user by the procedure and not through a generated script. Use the REVOKE_ADMIN_PRIVILEGE procedure to revoke privileges from a user. When you revoke privileges from a user using this procedure, the user is removed from the DBA_STREAMS_ADMINISTRATOR view.
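A sketch of creating a Streams administrator with GRANT_ADMIN_PRIVILEGE; the user name, password, and tablespace are placeholders:

```sql
-- Create the administrator account (DBA is typically granted separately;
-- the procedure does not grant it).
CREATE USER strmadmin IDENTIFIED BY password
  DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;
GRANT DBA TO strmadmin;

-- Grant the Streams administration privileges directly.
EXEC DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');

-- Users granted privileges directly by the procedure appear in this view.
SELECT * FROM DBA_STREAMS_ADMINISTRATOR;

-- Revoking the privileges also removes the user from the view.
EXEC DBMS_STREAMS_AUTH.REVOKE_ADMIN_PRIVILEGE(grantee => 'strmadmin');
```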

Archive Logging Archiving must be enabled at all source sites. A source site requires extra disk space for: Supplemental logging Retention of archived redo logs The following redo log files must be available to the capture process: All logs containing redo records for an SCN greater than or equal to the required checkpoint SCN All logs containing redo records for an SCN greater than or equal to the first SCN when starting a new capture process Archive Logging A local capture process reads online redo logs whenever possible and archived redo log files otherwise. A real-time downstream capture process reads standby redo logs or archived redo log files from its source database, depending on configuration. For this reason, the source database must be running in ARCHIVELOG mode when a capture process is configured to capture changes. You must keep an archived redo log file available until you are certain that no capture process will need that file. Standby redo logs must be configured when you use downstream_real_time_mine = Y (that is, when you configure real-time mining for downstream capture). When a capture process is restarted, it scans the redo log from the required checkpoint SCN forward and may require redo log files with a FIRST_CHANGE# value that is less than the start SCN for the capture process. The removal of required redo log files before they are scanned by a capture process causes the capture process to abort. A capture process must have access to the redo log file that includes the required checkpoint SCN and all subsequent redo log files. A capture process will never need the redo log files that contain information prior to its first SCN. Oracle Database 11g: Implement Streams

Archive Logging (continued)
You can query the DBA_CAPTURE data dictionary view to determine the first SCN, start SCN, and the required checkpoint SCN for each capture process.
SQL> SELECT capture_name, status, first_scn, start_scn,
  2         required_checkpoint_scn, captured_scn
  3    FROM DBA_CAPTURE;

CAPTURE_NAME STATUS  FIRST_SCN START_SCN REQUIRED_CHECKPOINT_SCN CAPTURED_SCN
------------ ------- --------- --------- ----------------------- ------------
CAPTURE      ENABLED

Sharing LogMiner Data Dictionaries
[Diagram: an instance SGA whose Streams pool contains LogMiner data dictionaries for capture processes CP01 and CP02, each created from a dictionary BUILD recorded in the redo log.]
Sharing LogMiner Data Dictionaries
There can be more than one LogMiner data dictionary for a particular source database. If there are multiple capture processes capturing changes from the source database, two or more capture processes can share a LogMiner data dictionary, or each capture process can have its own LogMiner data dictionary. If the LogMiner data dictionary needed by a capture process does not exist, the capture process populates it using information in the redo log when the capture process is started for the first time. The DBMS_CAPTURE_ADM.BUILD procedure extracts data dictionary information to the redo log. This procedure also identifies a valid first SCN value that can be used to create a capture process. You can perform a build of data dictionary information in the redo log multiple times, and a particular build may or may not be used by a capture process to create a LogMiner data dictionary. The amount of information extracted to a redo log when you run the BUILD procedure depends on the number of database objects in the database and can be quite large. Therefore, you must run the BUILD procedure only when necessary. If you want to create multiple capture processes that capture changes to the same source database, the capture processes can either create their own LogMiner data dictionary or share one of the existing LogMiner data dictionaries with one or more other capture processes.

Sharing LogMiner Data Dictionaries (continued) Whether a new LogMiner data dictionary is created for a new capture process depends on the setting for the first_scn parameter when you run CREATE_CAPTURE to create a capture process: If first_scn = NULL (default), the new capture process attempts to share a LogMiner data dictionary. If first_scn != NULL, the new capture process uses a new LogMiner data dictionary that is created when the new capture process is started for the first time. When you create a capture process and specify a non-NULL first_scn parameter value, this value should correspond to a data dictionary build in the redo log obtained by running the DBMS_CAPTURE_ADM.BUILD procedure. You can find the first SCN generated by the BUILD procedure by running the following query: SELECT DISTINCT FIRST_CHANGE#, NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES'; The most important factor to consider when deciding whether a new capture process should share an existing LogMiner data dictionary or create a new one is the difference between the maximum checkpoint SCN values of the existing capture processes and the start SCN of the new capture process. If the new capture process shares a LogMiner data dictionary, it must scan the redo log from the point of the maximum checkpoint SCN of the shared LogMiner data dictionary onward, even though the new capture process cannot capture changes prior to its first SCN. If the start SCN of the new capture process is much higher than the maximum checkpoint SCN of the existing capture process, the new capture process must scan a large amount of redo data before it reaches its start SCN. 
Follow these guidelines when you decide whether a new capture process should share an existing LogMiner data dictionary or create a new one: If one or more maximum checkpoint SCN values are greater than the start SCN you want to specify, and if this start SCN is greater than the first SCN of one or more existing capture processes, it may be better to share the LogMiner data dictionary of an existing capture process. In this case, you can assume there is a checkpoint SCN that is less than the start SCN and that the difference between this checkpoint SCN and the start SCN is small. The new capture process will begin scanning the redo log from this checkpoint SCN and will catch up to the start SCN quickly. If no maximum checkpoint SCN is greater than the start SCN, and if the difference between the maximum checkpoint SCN and the start SCN is small, it may be better to share the LogMiner data dictionary of an existing capture process. The new capture process will begin scanning the redo log from the maximum checkpoint SCN, but it will catch up to the start SCN quickly. If no maximum checkpoint SCN is greater than the start SCN, and if the difference between the maximum checkpoint SCN and the start SCN is large, it may take a long time for the capture process to catch up to the start SCN. In this case, you can create a new LogMiner data dictionary for the new capture process. It will take some time to create the new LogMiner data dictionary when the new capture process is first started, but the capture process can specify the same value for its first SCN and start SCN, and thereby avoid scanning a large amount of redo data unnecessarily. Oracle Database 11g: Implement Streams
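The build-and-create flow described above might be sketched as follows; the capture name and SCN value are hypothetical:

```sql
-- Extract data dictionary information to the redo log and capture the
-- resulting first SCN (run only when a new build is actually needed).
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN from build: ' || scn);
END;
/

-- Locate existing dictionary builds in the archived redo logs.
SELECT DISTINCT first_change#, name
FROM   v$archived_log
WHERE  dictionary_begin = 'YES';

-- Create a capture process that uses this build. A non-NULL first_scn
-- means a new LogMiner data dictionary is created at first start;
-- first_scn => NULL (the default) attempts to share an existing one.
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name   => 'strmadmin.streams_queue',
    capture_name => 'capture_new',
    first_scn    => 46911000);   -- SCN from a dictionary build
END;
/
```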

Purging the LogMiner Data Dictionary
Capture uses a LogMiner data dictionary.
Checkpoints are retained as specified with the checkpoint_retention_time parameter.
Use the ALTER_CAPTURE procedure to modify retention time.
Capture checkpoint functionality:
Older checkpoints are purged.
The FIRST_SCN value moves forward.
LogMiner data dictionary is purged and space reclaimed.
[Diagram: a timeline marking the first SCN, start SCN, required checkpoint SCN, and applied SCN, with the checkpoint retention time (in days) measured back from the required checkpoint SCN.]
Purging the LogMiner Data Dictionary
The checkpoint retention time for a capture process controls the amount of checkpoint data that it retains. The checkpoint retention time specifies the number of days before the required checkpoint SCN to retain checkpoints. The default value for the checkpoint retention time is 60 days. When a checkpoint is older than the specified time period, the capture process purges the checkpoint. When checkpoints are purged, the first SCN for the capture process moves forward, and the Oracle Streams metadata for SCNs before this new first SCN is purged. The space used by the metadata tables (usually in the SYSAUX tablespace) is reclaimed. The first SCN is the lowest possible SCN available for capturing changes. To alter the checkpoint retention time for a capture process, use the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package and specify the new retention time with the checkpoint_retention_time parameter. Log files with SCNs below this first SCN are never needed by the capture process and can be identified in the DBA_REGISTERED_ARCHIVED_LOG view with YES in the PURGEABLE column.
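For example, to shorten checkpoint retention from the 60-day default to 7 days (the capture process name is a placeholder):

```sql
-- Reduce checkpoint retention so older checkpoints are purged sooner.
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'capture_src',
    checkpoint_retention_time => 7);
END;
/

-- Archived logs below the (advanced) first SCN are flagged as purgeable.
SELECT name, first_scn, purgeable
FROM   DBA_REGISTERED_ARCHIVED_LOG
WHERE  purgeable = 'YES';
```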

Best Practices for Streams Database Operation
Follow the best practices for the global name of an Oracle Streams database. Adjust the automatic collection of optimizer statistics. Monitor performance and make adjustments when necessary. Check the alert log for Streams information. Monitor the capture process and synchronous capture queues for size. Follow Streams best practices for backups. Follow the best practices for replicating DDL changes. Follow the best practices for removing a Streams configuration. Best Practices for Streams Database Operation The following checklist relates to key operational tasks: For more information about GLOBAL_NAME, see the lesson titled “Overview of Oracle Streams.” Statistics are in their respective “detailed” lessons, such as capture statistics in “Configuring a Capture Process” and apply statistics in “Apply Concepts and Configuration.” For performance monitoring and alert log, see the lesson titled “Monitoring and Troubleshooting Oracle Streams.” More details about queue sizing and backups are covered in this lesson. When replicating data definition language (DDL) changes, do not allow system-generated names for constraints or indexes. Modifications to these database objects will most likely fail at the destination database because the object names at different databases will not match. Also, storage clauses must be identical. If you decide not to replicate DDL in your Streams environment, any table structure changes must be performed manually at each database in the environment. If you want to completely remove the Oracle Streams configuration at a database, complete the following steps: 1. Connect to the database as an administrative user, and run the DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION procedure. 2. Drop the Oracle Streams administrator, if possible. Oracle Database 11g: Implement Streams

Best Practices for Captured Messages
Do not rename the source database after a capture process has started capturing changes.
Each table for which changes are applied by an apply process must have a primary key.
Each column that you specify as a substitute key column must be a NOT NULL column and must be indexed.
Do not permit updates to the primary-key or substitute-key columns.
Best Practices for Captured Messages
Both row LCRs and DDL LCRs contain the source database name of the database where a change originated. If captured LCRs will be propagated by a propagation or applied by an apply process, to avoid propagation and apply problems, do not rename the source database after a capture process has started capturing changes. To detect conflicts and handle errors accurately, Oracle Database must be able to uniquely identify and match corresponding rows at different databases. By default, Streams uses the primary key of a table to identify rows in the table, and if a primary key does not exist, Streams uses the smallest unique index that has at least one NOT NULL column to identify rows in the table. When a table at a destination database does not have a primary key or a unique index with at least one NOT NULL column, or when you want to use columns other than the primary key or unique index for the key, you can designate a substitute key at the destination database. Each column that you specify as a substitute-key column must be a NOT NULL column. You must also create a single index that includes all the columns in a substitute key. Following these guidelines improves performance for changes because the database can locate the relevant row more efficiently.
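A sketch of designating a substitute key at a destination; the table and column names are hypothetical:

```sql
-- Designate a substitute key for a table without a usable primary key.
-- The listed columns must be NOT NULL at the destination.
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.regions_copy',   -- hypothetical destination table
    column_list => 'region_name');
END;
/

-- Create the single index covering all substitute-key columns.
CREATE UNIQUE INDEX hr.regions_copy_sub_key
  ON hr.regions_copy (region_name);
```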

Source Queue Growth
There are three common reasons for source queue growth:
A message cannot be propagated to a specified destination queue.
One or more destination databases acknowledge successful propagation of messages much more slowly than the other databases.
User-enqueued messages are not being dequeued.
Recommendation: Consider using the Streams split-and-merge functionality.
Source Queue Growth
The common reasons for Streams queue growth at the source site are the following: A message cannot be propagated to a specified destination queue for some reason, such as network problems. This situation could cause the source queue to grow large. The DBA must monitor the source queues on a regular basis to detect problems early. One or more destination databases acknowledge successful propagation of messages much more slowly than the other databases. In this case, the source queue can grow because the slower destination databases create a backlog of messages that have already been acknowledged by the faster destination databases. In an environment such as this, you should consider creating more than one capture process to capture changes at the source database. You can then use one source queue for the slower destination databases and another queue for the faster destination databases. User-enqueued messages remain in the queue until they are consumed by all the intended consumers. By default, messages are set to never expire. However, you can enqueue messages by using DBMS_AQ.ENQUEUE and configure the expiration message property, which specifies the interval of time that the message is available for dequeuing. If the message expires, it is moved to an exception queue.
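A sketch of enqueuing a user message with an expiration so that unconsumed messages move to the exception queue instead of accumulating; the queue name and payload are placeholders:

```sql
-- Enqueue a message that expires one hour after becoming available.
DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
BEGIN
  msg_props.expiration := 3600;   -- seconds; the default is DBMS_AQ.NEVER
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    payload            => SYS.AnyData.ConvertVarchar2('sample payload'),
    msgid              => msg_id);
  COMMIT;
END;
/
```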

NOLOGGING Operations
NOLOGGING or UNRECOVERABLE operations do not write to the redo logs and thus cannot be captured.
FORCE LOGGING prevents database changes from being missed by Streams.
To change database operations from NOLOGGING to FORCE LOGGING:
STARTUP MOUNT
ALTER DATABASE FORCE LOGGING
OR
ALTER TABLESPACE <name> FORCE LOGGING
FORCE LOGGING must be configured again after you re-create the control file.
NOLOGGING Operations
The NOLOGGING clause can cause certain database operations to not generate redo records to the redo logs. Specifying NOLOGGING can speed up operations that can be easily recovered outside of the database recovery mechanisms. If you specify FORCE LOGGING, the writing of redo records is forced, even where NOLOGGING has been specified in DDL statements. However, redo records for temporary tablespaces and temporary segments are never generated, so forced logging has no effect on these objects. To put the database into FORCE LOGGING mode, use the FORCE LOGGING clause in the CREATE DATABASE or ALTER DATABASE statement. If you do not specify this clause, the database is not placed into FORCE LOGGING mode. You can also specify FORCE LOGGING or NO FORCE LOGGING at the tablespace level. However, if FORCE LOGGING mode is in effect for the database, it takes precedence over the tablespace mode setting. It is recommended that either the entire database or individual tablespaces be placed into FORCE LOGGING mode, but not both. The FORCE LOGGING mode is a persistent attribute of the database. That is, if the database is shut down and restarted, it remains in the same logging mode state. However, if you re-create the control file, the database is not restarted in FORCE LOGGING mode unless you specify the FORCE LOGGING clause in the CREATE CONTROLFILE statement.
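The commands above, plus queries to verify the current mode (the tablespace name is a placeholder):

```sql
-- Put the whole database into FORCE LOGGING mode from the MOUNT state.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE OPEN;

-- Or enable it for a single tablespace instead.
ALTER TABLESPACE users FORCE LOGGING;

-- Verify the current settings.
SELECT force_logging FROM v$database;
SELECT tablespace_name, force_logging FROM dba_tablespaces;
```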

Clock Synchronization
Use GMT. Or: Ensure that system clocks are synchronized so that conflict resolution is handled correctly when multiple sites have the ability to update the same data. Alternatively, use the TIMESTAMP WITH TIME ZONE data type for the time-based column when configuring conflict resolution based on the latest time. Clock Synchronization If all databases use, for example, Greenwich Mean Time (GMT), there is no need to synchronize clocks. When configuring conflict resolution based on the time of a database change, you must pay attention to the different times for each system or use the same global time for all sites. For example, if a database in London updates a table at 2:00 PM local time, and a database in New York updates the same row at 2:00 PM local time, were these rows updated at the same time or not? Data types such as TIMESTAMP and TIMESTAMP WITH TIME ZONE make it easier to determine which site updated the row first or last. For example, by using a column named updt_time, which is of the TIMESTAMP WITH TIME ZONE data type, the following code fragment can be used in conflict resolution to set the new.updt_time variable to either the current time or old.updt_time, whichever is greater: IF :old.updt_time IS NULL OR :old.updt_time < SYSTIMESTAMP THEN :new.updt_time := SYSTIMESTAMP; ELSE :new.updt_time := :old.updt_time; END IF; Oracle Database 11g: Implement Streams

Integrating Triggers with Streams
Application of row LCRs to a shared table causes triggers that are defined on the table to fire. The SET_TRIGGER_FIRING_PROPERTY procedure in the DBMS_DDL package controls how a trigger is fired. A setting of “fire once” means not to fire for: Data changes performed via the apply process Execution of one or more apply errors Default value is “fire once” for DML and DDL triggers. Integrating Triggers with Streams If you define a trigger on a shared table, and this trigger exists at all sites where the shared table exists, the trigger may be fired multiple times for a single update to a table. Here is an example: 1. The table at SITE1.NET is updated, causing a trigger to fire. 2. The DML change is propagated to SITE2.NET. 3. The apply process at SITE2.NET applies the change to the shared table, causing the trigger to fire again. You can control the firing property of a DML or DDL trigger by using the SET_TRIGGER_FIRING_PROPERTY procedure in the DBMS_DDL package. By using this procedure, you can specify whether a trigger’s firing property is set to fire once. If a trigger’s firing property is set to fire once, it does not fire in the following cases: When a relevant change is made by an apply process When a relevant change results from the execution of one or more apply errors by using the EXECUTE_ERROR or EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package If a trigger is not set to fire once, it fires in both of these cases. Oracle Database 11g: Implement Streams

SET_TRIGGER_FIRING_PROPERTY Procedure
DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
  trig_owner IN VARCHAR2,
  trig_name  IN VARCHAR2,
  fire_once  IN BOOLEAN);
Example: Configure a trigger to fire for both local changes and changes generated by the apply process.
EXEC DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY ( -
  'HR', 'UPDATE_JOB_HISTORY', FALSE);
SET_TRIGGER_FIRING_PROPERTY Procedure
This procedure sets the specified DML or DDL trigger’s firing property. Regardless of the firing property set by this procedure, a trigger continues to fire when changes are made by means other than the apply process or apply error execution. For example, if a user session or an application makes a change, the trigger continues to fire, regardless of the firing property. In the example in the slide, changes to the HR.EMPLOYEES table are captured and applied at two sites. The trigger exists on both sites, but the HR.JOB_HISTORY table is not replicated. In this case, you would want the trigger to fire at both sites, so the update to the JOB_HISTORY table is performed at both sites. Note If you dequeue an error transaction from the error queue and execute it without using the DBMS_APPLY_ADM package, relevant changes resulting from this execution cause a trigger to fire, regardless of the trigger-firing property. Only DML and DDL triggers can be set to fire once. All other types of triggers are always executed.

Firing Property of Triggers
You can check the firing property of a trigger by using the IS_TRIGGER_FIRE_ONCE function in the DBMS_DDL package. DECLARE res BOOLEAN; BEGIN res := DBMS_DDL.IS_TRIGGER_FIRE_ONCE( 'HR', 'UPDATE_JOB_HISTORY'); IF res THEN DBMS_OUTPUT.PUT_LINE('Fire once? YES'); ELSE DBMS_OUTPUT.PUT_LINE('Fire once? NO'); END IF; END; Firing Property of Triggers By default, DML and DDL triggers are set to fire once. You can check a trigger’s firing property by using the IS_TRIGGER_FIRE_ONCE function in the DBMS_DDL package. Consider the HR.UPDATE_JOB_HISTORY trigger, which adds a row to the JOB_HISTORY table when data is updated in the JOB_ID or DEPARTMENT_ID column in the EMPLOYEES table. If the UPDATE_JOB_HISTORY trigger is not set to fire once, the following could happen: The JOB_ID column is updated for an employee in the EMPLOYEES table at SITE1. The UPDATE_JOB_HISTORY trigger fires at SITE1 and adds a row to the JOB_HISTORY table. The capture process at SITE1 captures the changes to both the EMPLOYEES and JOB_HISTORY tables, and the changes are propagated to SITE2. An apply process at the SITE2 database applies both changes. The UPDATE_JOB_HISTORY trigger fires at SITE2 when the apply process updates the EMPLOYEES table, and a second row is added to the JOB_HISTORY table. Oracle Database 11g: Implement Streams

Best Practices for Rules
Use simple rules. Do not use empty rule sets. Manage rules efficiently. Create rules so that only one rule in a positive rule set can evaluate to TRUE for any given condition. Set include_tagged_lcr to TRUE for negative rules. Specify a source database for propagation rules. Subset rules: Place subset rules in positive rule sets only. Use them for only those tables that are true data subsets of the source table. Best Practices for Rules Simple rules can take advantage of rule-evaluation optimizations. If you need to exclude tables from being captured, use table-level rules for only those objects for which you want to replicate changes, or use negative rules to exclude those objects for which you do not want to replicate changes. An empty rule set is not the same as no rule set at all. By not using rule sets, or by using empty rule sets, you may get unexpected results. Here are some examples: An apply process with no assigned rule set applies all changes. An apply process with an empty positive rule set does not apply any changes. An apply process with only an empty negative rule set applies all changes. You can use propagation or apply without a rule set. Using an apply process or propagation schedule without a defined rule set eliminates the need to perform rule evaluations, resulting in higher throughput. Only the capture process must have at least one rule set associated with it to avoid capturing unsupported data types. Use the DBA_STREAMS_RULES views for information about the rules used by all Streams processes in the database. The STREAMS_TYPE column distinguishes between the following values: CAPTURE, SYNC_CAPTURE, PROPAGATION, APPLY, and DEQUEUE. Oracle Database 11g: Implement Streams
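A sketch of a negative table rule with include_tagged_lcr set to TRUE; the excluded table and process names are placeholders:

```sql
-- Exclude one table from capture with a negative rule. With
-- include_tagged_lcr => TRUE, tagged LCRs for the table are discarded too.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.audit_log',   -- hypothetical excluded table
    streams_type       => 'capture',
    streams_name       => 'capture_src',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => TRUE,
    inclusion_rule     => FALSE);   -- FALSE adds the rule to the negative rule set
END;
/

-- Inspect the rules used by each Streams client in the database.
SELECT streams_type, streams_name, rule_set_type, rule_name
FROM   DBA_STREAMS_RULES;
```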

581 Oracle Database 11g: Implement Streams 18 - 581
Best Practices for Rules (continued)
Use the procedures from the DBMS_STREAMS_ADM package to configure rules for Oracle Streams whenever possible. Using these procedures automates many of the tasks required for implementing a Streams replication environment. In addition, the procedures in this package populate data dictionary views with information about the original configuration.
You must also manage a rule with the same package that you used to create it. For example, rules created with the DBMS_STREAMS_ADM package must be removed with the REMOVE_RULE procedure of DBMS_STREAMS_ADM. This procedure maintains the configuration information used by Streams as well as removing the specified rule from the rule set. The REMOVE_RULE procedure of DBMS_RULE_ADM removes the rule from the specified rule set but does not remove the information from the DBA_STREAMS_*_RULES views.
Create rules and place them in rule sets so that only one rule in a positive rule set can evaluate to TRUE for any condition. By following this guideline, you can avoid unpredictable results if multiple rules in the rule set use action contexts, or if a non-NULL action context is added to a rule in the rule set.
By setting the include_tagged_lcr parameter to TRUE for negative rules, all changes for the specified table, schema, or database that satisfy the rule condition, including tagged LCRs, are discarded.
When you create propagation rules for captured messages, specify a source database for the changes. An apply process uses transaction control messages to assemble captured messages into committed transactions. These transaction control messages, such as COMMIT and ROLLBACK, contain the name of the source database where the message occurred. To avoid unintended cycling of these messages, propagation rules must contain a condition that includes the source database.
Subset rules must reside only in positive rule sets.
Do not add subset rules to negative rule sets, because the results are unpredictable. If a rule in a negative rule set evaluates to FALSE, the LCR is not discarded. However, row migration (changing an update operation to either an insert or delete operation) is not performed on these LCRs, because the rule evaluated to FALSE and thus the action context is not invoked. Likewise, LCRs that satisfy rules in the negative rule set (the rules evaluate to TRUE) are discarded, so again, row migration does not occur.
To apply row LCRs that have been transformed by a row migration, the table at the destination database must be a subset table in which each row matches the condition in the subset rule. If the table is not such a subset table, apply errors may result. Similarly, if an apply process may apply row LCRs that have been transformed by row migration, and if you allow users or applications to perform DML operations on the table, you must allow only those DML changes that satisfy the subset condition. If you allow local changes to the table, the apply process cannot ensure that all rows in the table meet the subset condition.
Oracle Database 11g: Implement Streams

582 Oracle Database 11g: Implement Streams 18 - 582
Replicating DDL LCRs
Explicitly grant privileges to the apply user.
Filter out certain DDL operations at destination sites.
Set the instantiation SCN at the schema or global level at each destination site.
Replicating DDL LCRs
1. Ensure that all necessary system and object privileges are granted explicitly to the apply user. Granting privileges through roles is not sufficient; explicitly grant each privilege with the GRANT command.
2. Modify any manual scripts that manipulate tablespaces to set a Streams tag. The combination of setting a Streams tag at the source site and configuring rules at the apply site to ignore the tagged DDL LCRs prevents the replication of inappropriate DDL commands to the destination site. For example, if you need to alter a tablespace offline at a source site, you may not want the same tablespace taken offline at the destination site. If you do not replicate DDL commands in your Streams environment, any table structure changes must be performed manually at all sites.
3. Configure the instantiation SCNs at the schema or global level at the destination sites when replicating DDL changes. Otherwise, the apply process generates errors when attempting to apply the DDL changes. To replicate GRANT commands, you must set the instantiation SCN at the global level.
Oracle Database 11g: Implement Streams
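Step 3 above can be sketched as follows with DBMS_APPLY_ADM at the destination site. The database link name (site1.net) and the HR schema follow the course examples and are illustrative:

```sql
-- At the destination database: record the current source SCN as the
-- schema-level instantiation SCN, so DDL LCRs for HR can be applied
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@site1.net;
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'HR',
    source_database_name => 'SITE1.NET',
    instantiation_scn    => iscn);
END;
/
```

For GRANT commands, the analogous SET_GLOBAL_INSTANTIATION_SCN procedure would be used instead, because those changes are replicated only with a global-level instantiation SCN.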

583 Best Practices for Performing Backups of Your Streams Environment
Recommendation: Use RMAN, which manages all backup aspects.
For user-managed online backups:
- Use non-NULL Streams tags.
- Force checkpoints.
- Protect and retain archived logs.
- Use COMMIT_SERIALIZATION of FULL for apply.
- Implement a "heartbeat" table.
Best Practices for Performing Backups of Your Streams Environment
Ensure that your manual backup procedures use a non-NULL Streams tag with:
    ALTER TABLESPACE ... BEGIN BACKUP
    ALTER TABLESPACE ... END BACKUP
The tag should be chosen so that these DDL commands are ignored by the capture rule set. Backups performed using RMAN do not need to set a Streams tag. To set a Streams tag, use the DBMS_STREAMS.SET_TAG procedure. If the capture process is configured to capture non-NULL tags, you will need to customize the rules so that the DDL commands issued when performing online backups are not captured.
Force a LogMiner checkpoint at the source site before beginning any user-managed online backups. When you force a checkpoint, you request that a LogMiner checkpoint be taken at the next possible time. To force a checkpoint, explicitly set the hidden capture parameter _CHECKPOINT_FORCE to a value of Y. There must be at least one open transaction for a checkpoint to be taken. When a checkpoint is taken, the DBA_CAPTURE columns for checkpointing (MAX_CHECKPOINT_SCN and REQUIRED_CHECKPOINT_SCN) are updated. Check these columns first. At a minimum, the MAX_CHECKPOINT_SCN value should increase. After setting a checkpoint, query the SYSTEM.LOGMNR_RESTART_CKPT$ view to verify that at least one additional ckpt_scn has been written.
Oracle Database 11g: Implement Streams
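A user-managed online backup wrapped in a non-NULL Streams tag might look like the following sketch. The tag value and tablespace name are illustrative; the tag must match whatever your capture rules are configured to ignore:

```sql
-- Set a session tag so the backup DDL is tagged in the redo
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
END;
/

ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the datafiles with operating system utilities ...
ALTER TABLESPACE users END BACKUP;

-- Resume normal (NULL) tagging for the session
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```

With the default capture rules (include_tagged_lcr => FALSE), the tagged BEGIN/END BACKUP commands are not captured, so they are never replicated to the destination sites.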

584 Oracle Database 11g: Implement Streams 18 - 584
Best Practices for Performing Backups of Your Streams Environment (continued)
The Streams capture process requests a LogMiner checkpoint periodically while mining the redo (after processing every 10 MB of redo). A capture process aborts if it cannot access a needed archived redo log file, or if it cannot find the archived redo log file in its expected location. In releases later than , RMAN correctly manages the archived logs in a Streams configuration.
Ensure that the apply processes are configured with COMMIT_SERIALIZATION set to FULL (the default value). This setting instructs the apply process to commit applied transactions in the same order in which they were committed at the source database. If the parameter is set to NONE, the apply process may commit transactions in any order. Dependent transactions are always applied at the destination database in the same order in which they were committed at the source database.
To ensure that the Streams metadata is maintained and the APPLIED_SCN column of the DBA_CAPTURE view is updated periodically, implement a heartbeat table. A heartbeat table is a replicated table that is updated periodically (for example, every five minutes). For example, you can create a table that includes a date or time stamp column and then set up an automated job to update this table at regular intervals. The Streams capture process requests a checkpoint after every 10 MB of generated redo. During the checkpoint, the metadata for Streams is maintained if there are active transactions. Implementing a heartbeat table ensures that there are open transactions occurring regularly within the source database, enabling additional opportunities for the metadata to be updated frequently. By implementing a heartbeat table, the database backups of your Streams databases will have the most up-to-date metadata possible. Additionally, the heartbeat table can provide a quick method for determining the health of Streams replication.
Oracle Database 11g: Implement Streams
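A minimal heartbeat implementation could look like the following sketch. The table, schema, and job names are illustrative; the table itself must be included in the replicated set:

```sql
-- A small replicated table with a timestamp column
CREATE TABLE strmadmin.heartbeat (
  site_name  VARCHAR2(128) PRIMARY KEY,
  updated_at TIMESTAMP);

INSERT INTO strmadmin.heartbeat VALUES ('SITE1.NET', SYSTIMESTAMP);
COMMIT;

-- A scheduler job that touches the row every five minutes, giving the
-- capture process a regular open transaction to checkpoint against
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'STRMADMIN.HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'UPDATE strmadmin.heartbeat
                          SET updated_at = SYSTIMESTAMP; COMMIT;',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=5',
    enabled         => TRUE);
END;
/
```

Comparing the UPDATED_AT value at the source and destination sites then gives a quick, coarse indication of replication lag.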
Oracle Database 11g: Implement Streams

585 Point-in-Time Recovery Issues
After performing a recovery, verify that all Streams processes are enabled and started.
To resynchronize a source database in a bidirectional Streams environment, you can:
1. Verify that the destination database has applied all the changes sent from the source database needing recovery.
2. Remove the Streams configuration from the source database.
3. At the destination database, drop the apply process that applies changes from the source database undergoing recovery.
4. Follow the steps for adding a new source database to an existing Streams environment.
Recovery Issues
Point-in-time recovery (PITR) is the recovery of a database to a specified noncurrent time, SCN, or log sequence number. As a result, the database is returned to a state that existed in the past, and the most recent transactions are lost.
A multiple-source environment is one in which there is more than one source database for any of the shared data. If a source database in a multiple-source environment cannot be recovered to the current point in time, you can use the method described in this section to resynchronize the source database with the other source databases in the environment. For example, for a bidirectional Streams environment, assume that database A is the database that must be resynchronized and that database B is the other source database in the environment. To resynchronize database A in this bidirectional Streams environment, complete the following steps:
1. Verify that database B has applied all the changes sent from database A. You can query the V$BUFFERED_SUBSCRIBERS and the V$STREAMS_TRANSACTION views at database B to determine whether the apply process that applies these changes has any unapplied changes in its queue. Do not continue until all these changes have been applied.
2. Remove the Streams configuration from database A by running the REMOVE_STREAMS_CONFIGURATION procedure in the DBMS_STREAMS_ADM package.
Oracle Database 11g: Implement Streams

586 Oracle Database 11g: Implement Streams 18 - 586
Recovery Issues (continued)
3. At database B, drop the apply process that applies changes from database A. Do not drop the rule sets used by this apply process, because you will re-create the apply process in a subsequent step.
4. To add database A back into the Streams environment, complete the steps covered in the "Adding a New Database to a Multiple-Source Streams Environment" topic in the lesson titled "Extending the Streams Environment."
You can also use features of Streams to assist with the recovery of a failed site. The following slides discuss additional methods of recovering failed sites in a Streams environment.
Oracle Database 11g: Implement Streams
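Steps 1 and 2 of the resynchronization procedure can be sketched as follows. Both statements assume the Streams administrator account; adapt the filters to your apply process names:

```sql
-- Step 1 (at database B): check for transactions from A that the
-- apply process is still working on; wait until none remain
SELECT streams_name, streams_type
FROM   v$streams_transaction
WHERE  streams_type = 'APPLY';

-- Step 2 (at database A): remove the entire Streams configuration
BEGIN
  DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
END;
/
```

REMOVE_STREAMS_CONFIGURATION drops all capture processes, propagations, apply processes, and Streams queues in the database, so run it only on the database being resynchronized.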

587 Point-in-Time Recovery with Streams
[Diagram: capture progress at SITE1.NET, with the PITR SCN marked, compared with apply progress at SITE2.NET, with the start SCN and applied SCN marked]
Point-in-Time Recovery with Streams
In a distributed environment such as Oracle Streams, if one site requires recovery to a past time, it is often necessary to recover all other sites in the environment to the same point in time to preserve global data consistency. Because Streams applies the same transactions at multiple sites and this information is recorded in the redo logs at each site, it is possible to use one site in a Streams environment to recover another site.
Oracle Database 11g: Implement Streams

588 Recovering a Destination Database After PITR
Two methods of recapturing changes:
- Reset START_SCN for the capture process at the source site and resend the missing changes.
- Create a new capture process with a specific START_SCN, a new propagation at the source site, and an apply process at the recovered site.
[Diagram: existing source site, with capture process CP01 and its propagation job plus a new capture process CP02 and job, sending changes to the recovered site, with apply processes AP01 and AP02]
Recovering a Destination Database After PITR
If PITR is required for a destination database in a Streams environment, you must reapply the captured changes that were applied after the time used for PITR. To accomplish this, you can:
- Alter the capture process at the source database of each apply process to start capturing changes at an earlier SCN. This SCN is the oldest SCN for the apply process at the recovered database.
- Configure a capture process to recapture these changes from the redo logs at the source database, a propagation to propagate these changes from the database where changes are recaptured to the recovered database, and an apply process at the recovered database to apply these changes.
Resetting the start SCN for the capture process is simpler than creating a new capture process. However, if the capture process captures changes that are applied at multiple destination databases, the recaptured changes are sent to all the destination databases, including the ones that did not perform PITR. If a change is already applied to a destination database, it is discarded by the apply process, but you may not want to resend the changes to multiple destination databases. In this case, you can create and temporarily use a new capture process and a new propagation that propagates changes to only the destination database that was recovered.
Oracle Database 11g: Implement Streams

589 Oracle Database 11g: Implement Streams 18 - 589
Recovering a Destination Database After PITR (continued)
If there are multiple apply processes at the destination database where you performed PITR, you must perform recovery for each apply process.
Neither of these methods should be used if any of the following conditions are true regarding the destination database that you are recovering:
- A propagation propagates user-enqueued messages to the destination database. Both of these methods reapply only captured messages at the destination database, not user-enqueued messages.
- In a directed networks configuration, the destination database is used to propagate messages from a capture process to other databases, but the destination database does not apply messages from this capture process.
- The oldest message number for an apply process at the destination database is less than the first SCN of a capture process that captures changes for this apply process.
- The archived log files that contain the intended start SCN are no longer available.
If any of these conditions are true in your environment, you cannot use the methods described here. Instead, you must manually resynchronize the data at all destination databases.
Refer to the Oracle Streams Replication Administrator's Guide for detailed information about how to:
- Reset the start SCN for the existing capture process that captures the changes that are applied at the destination database
- Create a new capture process to capture the changes that must be reapplied at the destination database
Oracle Database 11g: Implement Streams
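The simpler of the two methods, resetting the start SCN of the existing capture process, can be sketched with DBMS_CAPTURE_ADM.ALTER_CAPTURE. The capture name and SCN below are illustrative; the new start SCN must be greater than or equal to the capture process's first SCN:

```sql
-- Rewind the capture process so that changes from the chosen SCN
-- onward are recaptured and resent to the destination sites
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'capture1',
    start_scn    => 9462);
END;
/
```

Remember that, as noted above, every destination fed by this capture process receives the recaptured changes; already-applied changes are detected and discarded by each apply process.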

590 Recovering a Single-Source Database After PITR
[Diagram: source site recovered to SCN 9462, with capture process CP01 and its propagation job replaced by CP02 after DBMS_CAPTURE_ADM.BUILD; destination site with apply processes AP01 and AP02, showing a greatest applied SCN of 9516 in the first scenario and 9460 in the second]
Recovering a Single-Source Database After PITR
If PITR is required for the source database in a single-source Streams environment, all capture processes that capture changes generated at the source database must be stopped.
1. If the destination database has applied transactions that occurred after the PITR SCN, you can attempt to recover as many transactions as possible at the source database by using transactions applied at the destination database. This method assumes that you can identify the transactions applied at the destination database by using a utility such as LogMiner or Oracle Flashback Query. The transactions are manually executed at the source database with a Streams tag set, to prevent the recovery transactions from being recaptured by the capture process and resent to the destination database.
2. If the destination database has not applied any transactions that occurred after the PITR SCN, you can use the existing capture process to capture changes up to the PITR SCN.
3. After it completes, drop the existing capture and apply processes. Then create a new apply process and a new capture process, using the FIRST_SCN generated by the postrecovery BUILD operation when creating the capture process. Then restart your Streams replication system.
Refer to the Oracle Streams Replication Administrator's Guide for detailed information about performing PITR in a single-source Streams replication environment.
Oracle Database 11g: Implement Streams

591 Oracle Database 11g: Implement Streams 18 - 591
Recovering a Source Database in a Multiple-Source Environment After PITR
[Diagram: recovered source database (recovered to SCN 9462) with capture process CP01 and apply process AP01; other source database (greatest applied SCN 9516) where a new capture process CP02 remines the redo logs for the lost changes]
Recovering a Source Database in a Multiple-Source Environment After PITR
When you use Oracle Streams, changes made to the database are read from the redo logs. If one site fails and can only be partially recovered, you can use the redo from another Streams source database to recover the shared objects at the failed site. To restore changes made to the recovered database after PITR, you configure a capture process to recapture these changes from the redo logs at the other source database, a propagation to propagate these changes from the database where changes are recaptured to the recovered database, and an apply process at the recovered database to apply these changes.
For this recovery method to work, the Streams environment must be designed correctly before any sites fail:
- For a shared object to be recoverable, a copy of the object must exist at a different Streams database that is running a capture process and is capturing changes for the shared objects that you want to recover.
- If there are multiple apply processes at that capture site, each apply process must use a different non-NULL apply tag, so that the changes originating from a particular source database can be identified when you remine the redo logs.
Oracle Database 11g: Implement Streams

592 Oracle Database 11g: Implement Streams 18 - 592
Recovering a Source Database in a Multiple-Source Environment After PITR (continued)
The following SCN values are required to restore lost changes to the recovered database:
- Point-in-time SCN: The SCN for PITR at the recovered database.
- Instantiation SCN: The SCN value to which the instantiation SCN must be set for each database object involved in the recovery at the recovered database while changes are being reapplied. At the other source database, this SCN value corresponds to one less than the commit SCN of the first transaction that was applied at the other source database and lost at the recovered database.
- Start SCN: The SCN value to which the start SCN is set for the capture process that is created to recapture changes at the other source database. This SCN value corresponds to the earliest SCN at which the apply process at the other source database started applying a transaction that was lost at the recovered database. This capture process may be a local or downstream capture process that uses the other source database for its source database.
- Maximum SCN: The SCN value to which the maximum_scn parameter for the capture process created to recapture lost changes must be set. The capture process stops capturing changes when it reaches this SCN value. The current SCN for the other source database is used for this value.
You must record the point-in-time SCN when you perform PITR on the recovered database. You can use the GET_SCN_MAPPING procedure in the DBMS_STREAMS_ADM package to determine the other necessary SCN values.
For example:

    SET SERVEROUTPUT ON
    DECLARE
      dest_scn  NUMBER;
      start_scn NUMBER;
      dest_skip DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      DBMS_STREAMS_ADM.GET_SCN_MAPPING(
        apply_name             => 'apply_site1_lcrs',
        src_pit_scn            => '9462',
        dest_instantiation_scn => dest_scn,
        dest_start_scn         => start_scn,
        dest_skip_txn_ids      => dest_skip);
      IF dest_skip.count = 0 THEN
        DBMS_OUTPUT.PUT_LINE('No Skipped Transactions');
        DBMS_OUTPUT.PUT_LINE('Destination SCN: ' || dest_scn);
      ELSE
        DBMS_OUTPUT.PUT_LINE('Destination SCN invalid for Flashback Query.');
        DBMS_OUTPUT.PUT_LINE('At least 1 trx. was skipped.');
      END IF;
    END;
    /

For more information about the steps to perform when recovering a source database in a multisource environment, refer to the Oracle Streams Replication Administrator's Guide.
Oracle Database 11g: Implement Streams

593 Upgrading Streams to Oracle Database 11g
Reconfigure database initialization parameters.
Grant the DBA role to the Streams administrator.
Update enqueuing procedures if necessary.
Query the DBA_STREAMS_NEWLY_SUPPORTED and DBA_STREAMS_UNSUPPORTED views.
Revisit complex rules. Rewrite them to use negative rules or the AND condition (if possible).
Upgrading Streams to Oracle Database 11g
If you want to upgrade your entire Oracle Database 10g, Release 2 Streams environment to Oracle Database 11g, follow the database upgrade as outlined in the Oracle Database Upgrade Guide 11g. Additionally, there are some steps you must perform to ensure that Streams functions properly on Oracle Database 11g, and to get the best performance out of your Streams environment:
- Set the STREAMS_POOL_SIZE initialization parameter to an appropriate value for your system. You can then reduce the value of the SHARED_POOL_SIZE parameter to release the amount of memory you were reserving for the buffer queue storage.
- Eliminate the AQ_TM_PROCESSES parameter from your database initialization parameter file. This parameter must not be set, so that the queue management processes can be managed automatically. You must not set this parameter explicitly to 0.
- When upgrading from earlier releases, drop any propagations and re-create them, specifying the queue_to_queue parameter as TRUE.
Oracle Database 11g: Implement Streams

594 Oracle Database 11g: Implement Streams 18 - 594
Upgrading Streams to Oracle Database 11g (continued)
The DBA role is required by Streams administrator users. Ensure that this role is directly granted to the Streams administrator users on your system.
New features implemented in Oracle Database 11g may require you to change your enqueuing procedures, or may make them unnecessary. Here are some examples:
- User-enqueued messages cannot be enqueued into a staging queue unless a subscriber exists for the message.
- For configurations that use an apply handler to reenqueue a captured message as a user-enqueued message, this can now be done without an apply handler by using the SET_ENQUEUE_DESTINATION procedure of DBMS_APPLY_ADM.
- Some tables, such as materialized view logs, are automatically skipped and do not require negative rules or complex rules. To determine whether an object is automatically skipped by Streams clients, query the DBA_STREAMS_UNSUPPORTED view and check the value of the AUTO_FILTERED column. This view also lists the reason why the database objects are not supported in the current release. The DBA_STREAMS_NEWLY_SUPPORTED view displays information about all tables in the database that are newly supported by Streams running in Oracle Database 11g. This view also lists the reasons why the database objects were not supported in the previous release.
- If apply handlers or custom transformations are used to transform schema or table names, consider the use of declarative transformations.
- If you have schema or global rules that include NOT clauses to eliminate individual tables or schemas, you must re-create these rules as simple positive and negative rules to improve the performance of rule evaluations.
Oracle Database 11g: Implement Streams
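The post-upgrade checks of the two dictionary views can be done with queries like the following sketch:

```sql
-- Objects Streams cannot replicate in this release, the reason why,
-- and whether they are now filtered automatically (no rules needed)
SELECT owner, table_name, reason, auto_filtered
FROM   dba_streams_unsupported
ORDER  BY owner, table_name;

-- Tables that became supported in this release, with the reason they
-- were previously unsupported
SELECT owner, table_name, reason
FROM   dba_streams_newly_supported
ORDER  BY owner, table_name;
```

Rows with AUTO_FILTERED = 'YES' are candidates for removing the negative or complex rules that previously excluded them.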

595 Configuring Streams to Operate Between Different Versions
If you want to include both an Oracle Database 10g site and an Oracle Database 11g site in your Streams environment, note the following:
- Capabilities at each site depend on the limitations of the earlier release.
- Relevant for earlier versions:
  - If the Oracle9i Database site is to be a source site, you must configure flow control.
  - A downstream capture process cannot be created on an Oracle9i Database, Release 2 site.
Configuring Streams to Operate Between Different Versions
Oracle Streams can interoperate between Oracle Database 11g and Oracle Database 10g. For example, a capture process on an Oracle Database 10g database can propagate messages and LCRs to an Oracle Database 11g database, where they can be applied by an apply process configured by Oracle Database 11g. Similarly, changes captured on an Oracle Database 11g database can be propagated to an Oracle Database 10g database and applied by an apply process there.
Relevant for Earlier Versions
When interoperating between two different releases, remember that the earlier release's restrictions are still enforced. For example, Streams in Oracle Database 10g supports the replication of LONG data types, but Oracle9i Database, Release 2 does not. Replicating a table with a LONG data type from Oracle Database 10g to Oracle9i Database, Release 2 will result in an error on the Oracle9i Database, Release 2 site.
In addition, the Oracle 9.2 manual flow control is automatic in Oracle Database 10g and integrated into the capture process architecture. (See MetaLink ( ) for 9.2 manual flow control.) In Oracle9i Database, Release 2, however, flow control must be manually configured. If you are using Oracle Streams between two different database versions, and the Oracle9i Database, Release 2 site is the source site, you must continue to use the manual flow control procedures. You do not need to configure flow control on the Oracle9i database if it is only a destination site.
Oracle Database 11g: Implement Streams

596 Oracle Database 11g: Implement Streams 18 - 596
Configuring Streams to Operate Between Different Versions (continued)
Downstream capture (the ability to capture changes for a source database at an alternative database) cannot be used with Oracle9i Database, Release 2 databases. A downstream capture process cannot be created for an Oracle9i Database, Release 2 site. That is, you cannot create a downstream capture process in 9.2, and you cannot create a 10g downstream capture process for a 9.2 source database.
Oracle Database 11g: Implement Streams

597 Oracle Database 11g: Implement Streams 18 - 597
Summary
In this lesson, you should have learned how to:
- Implement best practices for configuring a Streams environment
- Identify key operational tasks for maintaining a Streams environment
- Identify and follow best practices for captured messages
- Use triggers in a Streams environment
- Identify and follow best practices for rules
- Describe backup and recovery issues for a Streams environment and follow best practices with the use of RMAN
- Describe interoperability issues
Oracle Database 11g: Implement Streams

598 Monitoring Oracle Streams

599 Oracle Database 11g: Implement Streams 18 - 599
Objectives
After completing this lesson, you should be able to:
- Describe and use the tools that are available for monitoring Streams
- Monitor LCR messages
- Monitor statistics for capture, propagation, and apply
- Respond to Oracle Streams alerts
- Check the trace files and alert log for problems
- Use Streams monitoring tools, including:
  - Oracle Streams Performance Advisor
  - UTL_SPADV package
  - Healthcheck
Objectives
See Appendix D ("Common Streams Error Messages") for information about troubleshooting.
Oracle Database 11g: Implement Streams

600 Oracle Database 11g: Implement Streams 18 - 600
Monitoring Tools
For configuration issues:
- Tracking messages
- Checking alert logs
- Querying Streams data dictionary views
- Performing Streams Healthcheck
For performance issues:
- Obtaining Streams Healthcheck, AWR, and ASH reports for the same time period
- Correlating the Streams activity with UTL_SPADV (or the STRMMON default output)
Monitoring Tools
If you encounter configuration issues, you can track messages via the SET_MESSAGE_TRACKING procedure and the V$STREAMS_MESSAGE_TRACKING view. Then you check the alert logs and dictionary views. Finally, you can run the Streams Healthcheck. To fully analyze your Streams configuration, it is essential to get the Healthcheck report from all participating databases.
Note: The Streams Healthcheck is a SQL script available on MetaLink. Download the script that matches your database version.
If you encounter performance issues, you can begin your analysis with the Streams Healthcheck and by checking the alert logs. In addition to the Healthcheck output, you must obtain the AWR and ASH reports of the same time period for all Streams databases. To correlate the Streams activities in the database, you can use the UTL_SPADV package (stored in $ORACLE_HOME/rdbms/admin) or, for database versions earlier than 11g R1, the STRMMON default output (not the -long output). The UTL_SPADV package provides subprograms to collect and analyze statistics for the Oracle Streams components in a distributed database environment. The package uses the Oracle Streams Performance Advisor to gather statistics.
Oracle Database 11g: Implement Streams

601 Monitoring LCR Messages
Monitor the path of LCRs through Streams processing (capture, propagate, apply) using:
- DBMS_STREAMS_ADM.SET_MESSAGE_TRACKING: Set the tracking label for logical change records generated by a database session.
- V$STREAMS_MESSAGE_TRACKING: Query the dictionary view to track the LCRs through the stream.

    DBMS_STREAMS_ADM.SET_MESSAGE_TRACKING(
      tracking_label IN VARCHAR2 DEFAULT 'Streams_tracking',
      actions        IN NUMBER   DEFAULT DBMS_STREAMS_ADM.ACTION_MEMORY);

Monitoring LCR Messages
LCR tracking is useful if LCRs are not being applied as expected by one or more apply processes. When this happens, you can use LCR tracking to determine where the LCRs are stopping in the stream and address the problem at that location. The SET_MESSAGE_TRACKING procedure lets you specify a tracking label for logical change records (LCRs) generated by a database session.
- tracking_label: The label used to track the LCRs. (It does not track non-LCR messages.) Set this parameter to NULL to stop message tracking in the current session.
- actions:
  - DBMS_STREAMS_ADM.ACTION_MEMORY: The LCRs are tracked in memory, and the V$STREAMS_MESSAGE_TRACKING dynamic performance view is populated with information about the LCRs.
  - DBMS_STREAMS_ADM.ACTION_TRACE: The LCRs are tracked in the trace files of each database that processes the LCRs.
Oracle Database 11g: Implement Streams
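A typical tracking session might look like the following sketch. The label, the DML statement, and the column list in the query are illustrative; check the V$STREAMS_MESSAGE_TRACKING column names in your release before relying on them:

```sql
-- Label all LCRs generated by this session
BEGIN
  DBMS_STREAMS_ADM.SET_MESSAGE_TRACKING(
    tracking_label => 'DEBUG_HR_FLOW');
END;
/

-- Generate a change to follow through the stream
UPDATE hr.employees SET salary = salary WHERE employee_id = 100;
COMMIT;

-- On each database in the path, see which component last touched
-- the labeled LCRs and what action it took
SELECT tracking_label, component_name, component_type, action
FROM   v$streams_message_tracking
WHERE  tracking_label = 'DEBUG_HR_FLOW';
```

If the labeled LCRs appear at the capture site but never at the apply site, the problem lies in propagation or in the rules between the two.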

602 Monitoring and Managing the Flow of LCRs for CCA
When CCA optimization is in use, you can do the following:
- Modify the capture rule set.
- Modify the propagation rule sets to control the LCRs sent.
- Stop or unschedule the propagation to stop the LCR flow.
- Start or schedule the propagation to start the LCR flow.
Monitoring and Managing the Flow of LCRs for CCA
When combined capture and apply (CCA) is in use, LCRs are transmitted directly from the capture process to the apply process via a database link. In this mode, the capture process does not stage the LCRs in a queue or use queue propagation to deliver them. The capture process still uses the propagation rule sets to determine which LCRs to send to the apply process, so you can modify the propagation rule sets (managing the propagation with the DBMS_PROPAGATION_ADM package) to control which LCRs are sent. In addition, rule-based transformations that are configured for the rules in the positive propagation rule set are run when a rule evaluates to TRUE.
To stop the flow of LCRs, either stop or unschedule the propagation. To start the flow of LCRs, either start or schedule the propagation. However, changes to other parameters in the propagation schedule are ignored, and no statistics are gathered for the propagation. In addition, the following views contain no statistics when CCA is used:
- DBA_QUEUE_SCHEDULES
- USER_QUEUE_SCHEDULES
- V$PROPAGATION_RECEIVER
- V$PROPAGATION_SENDER
- V$BUFFERED_PUBLISHERS
- V$BUFFERED_SUBSCRIBERS
Oracle Database 11g: Implement Streams
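Stopping and starting the LCR flow under CCA can be sketched with DBMS_PROPAGATION_ADM. The propagation name below is illustrative:

```sql
-- Pause the flow of LCRs from capture to apply
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'prop_site1_to_site2');
END;
/

-- Resume the flow
BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'prop_site1_to_site2');
END;
/
```

Because CCA bypasses queue propagation, only the stop/start (or unschedule/schedule) state matters here; other propagation schedule parameters are ignored, as noted above.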

603 Oracle Database 11g: Implement Streams 18 - 603
Managing the Flow of LCRs for CCA (continued) When CCA is used, you can manage capture processes and apply processes normally. Specifically, you control capture process and apply process behavior in the following ways: Changes must satisfy the capture and propagation rule sets to be captured by the capture process. LCRs must satisfy the apply process rule sets to be applied by the apply process. Rule-based transformations that are configured for the rules in the positive rule set of a capture process, propagation, or apply process are run when a rule evaluates to TRUE. LCRs are sent to apply handlers for an apply process when appropriate. Update conflict resolution handlers must be invoked during apply when appropriate. Oracle Database 11g: Implement Streams

604 Monitoring Capture Statistics
The V$STREAMS_CAPTURE view displays the following information about the capture processes: State of the capture process Total number of captured messages Total number of enqueued messages Creation time of the most recent message Creation time of the last-enqueued message Time spent capturing changes Time spent evaluating rules Time spent enqueuing messages Time spent creating LCRs Monitoring Capture Statistics To view the capture-process statistics, use the V$STREAMS_CAPTURE view. For example, to display general information for a capture process named CAPTURE1, run the following query: SELECT SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME, c.SID, c.SERIAL#, c.STATE, c.TOTAL_MESSAGES_CAPTURED, c.TOTAL_MESSAGES_ENQUEUED FROM V$STREAMS_CAPTURE c, V$SESSION s WHERE c.CAPTURE_NAME = 'CAPTURE1' AND c.SID = s.SID AND c.SERIAL# = s.SERIAL#; With formatting, the output looks similar to the following: Capture Session Total Process Session Serial Redo Entries LCRs Number ID Number State Scanned Enqueued C CAPTURING CHANGES You can also view the capture process statistics in Enterprise Manager by clicking the name of the capture process that you want to monitor. Oracle Database 11g: Implement Streams

605 Monitoring Propagation Statistics with EM
You can also use Enterprise Manager to monitor Streams propagation. Some of the information displayed is the following: Number in State WAITING: A transaction tried to dequeue messages in the WAITING state, but the dequeue failed. These messages are returned to the READY state after a retry delay interval. Number in State READY: Messages in the READY state are ready to be propagated or dequeued. Number in State EXPIRED: Messages are in the EXPIRED state if one or more consumers did not dequeue the message before the expiration time. When a message has expired, it is moved to an exception queue. Total Wait Time in Seconds: Total number of seconds that dequeue operations have waited to dequeue messages. A dequeue operation may wait if no message in the queue matches its search criteria. Average Wait Time in Seconds: Average number of seconds that a dequeue operation has waited to dequeue a message Buffered Queue-Message Count: Total number of messages in the buffered queue Buffered Queue-Spilled Messages: Number of messages in the buffered queue that have spilled from the memory into a persistent queue table. Note: These statistics are displayed as 0 if CCA is active. Monitoring Propagation Statistics with EM Oracle Database 11g: Implement Streams

606 Monitoring Apply Statistics
GV$STREAMS_APPLY_COORDINATOR GV$STREAMS_APPLY_READER GV$STREAMS_APPLY_SERVER GV$STREAMS_TRANSACTION GV$BUFFERED_SUBSCRIBERS DBA_HIST_STREAMS_APPLY_SUM Monitoring Apply Statistics You can use the following dynamic data dictionary views to obtain statistics for the apply processes: GV$STREAMS_APPLY_COORDINATOR monitors the overall performance of the apply coordinator process, which spawns and manages the apply servers. Using this view, you can determine: Session identifier and serial number of the coordinator’s session Apply process number (the digits in the process name annn) Current state of the coordinator (INITIALIZING, APPLYING, SHUTTING DOWN CLEANLY, or ABORTING) Total number of transactions received and successfully applied by the coordinator process since the apply process was last started Number of transactions applied by the apply process that resulted in an apply error since the apply process was last started Total number of transactions received by the coordinator process but ignored because the apply process had already applied the transactions since the apply process was last started Oracle Database 11g: Implement Streams
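A minimal coordinator query along the lines of the description above might look as follows. The column names follow the 11g reference documentation for this view; verify them against your release.

```sql
-- Overall transaction counts for each apply process coordinator
SELECT apply_name,
       state,
       total_received,   -- transactions received since the apply process last started
       total_applied,    -- transactions successfully applied
       total_errors      -- transactions that resulted in apply errors
FROM   GV$STREAMS_APPLY_COORDINATOR;
```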

607 Oracle Database 11g: Implement Streams 18 - 607
Monitoring Apply Statistics (continued) GV$STREAMS_APPLY_READER displays information about each apply reader. The apply reader for an apply process is a process that reads (dequeues) messages from the queue, computes message dependencies, builds transactions, and passes the transactions on to the apply process coordinator in commit order for assignment to the apply servers. Using this view, you can determine: Types of messages that are dequeued by the reader server (either captured LCRs or user-enqueued messages) Name of the parallel execution server that is used by the reader server Current state of the reader server (IDLE, DEQUEUE MESSAGES, or SCHEDULE MESSAGES) Amount (in bytes) of System Global Area (SGA) memory used by the apply process Cumulative total number of messages dequeued by the reader server Message creation time: For captured messages, the message creation time is the time when the DML or DDL change generated the redo information at the source database for the message. Latency: For captured messages, the latency is the amount of time between when the message was created at a source database and when the message was dequeued by the apply process. PROXY_SID: When the apply process uses CCA, this column shows the session ID of the apply process network receiver that is responsible for direct communication between capture and apply. If the apply process does not use CCA, the value in this column is 0. PROXY_SERIAL: When the apply process uses CCA, this column shows the serial number of the apply process network receiver that is responsible for direct communication between capture and apply. If the apply process does not use CCA, the value in this column is 0. PROXY_SPID: When the apply process uses CCA, this column shows the process identification number of the apply process network receiver that is responsible for direct communication between capture and apply. If the apply process does not use CCA, the value in this column is 0. 
CAPTURE_BYTES_RECEIVED: When the apply process uses CCA, this column shows the number of bytes received by the apply process from the capture process since the apply process last started. If the apply process does not use CCA, this column is not populated. Note: The last four parameters are combined capture and apply (CCA) statistics. Oracle Database 11g: Implement Streams
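A sketch of a latency query against the apply reader view, derived from the description above. The column names (DEQUEUE_TIME, DEQUEUED_MESSAGE_CREATE_TIME, and so on) are as documented for 11g but should be verified against your release.

```sql
-- Approximate apply reader latency in seconds, per apply process
SELECT apply_name,
       (dequeue_time - dequeued_message_create_time) * 86400 latency_seconds,
       total_messages_dequeued,
       sga_used
FROM   GV$STREAMS_APPLY_READER;
```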

608 Oracle Database 11g: Implement Streams 18 - 608
Monitoring Apply Statistics (continued) GV$STREAMS_APPLY_SERVER displays information about each apply server and its activities. For each message received from the apply coordinator, an apply server either applies the message or sends the message to the appropriate apply handler. Using this view, you can determine: Name of the apply process Process names of the parallel execution servers (in order) Effective apply parallelism for an apply process Current state of each apply server (IDLE, RECORD LOW-WATERMARK, ADD PARTITION, DROP PARTITION, EXECUTE TRANSACTION, WAIT COMMIT, WAIT DEPENDENCY, WAIT FOR NEXT CHUNK, TRANSACTION CLEANUP, INITIALIZING, SPILLING, or PAUSED) Total number of transactions assigned to each apply server since the last time the apply process was started. A transaction may contain multiple messages. Total number of messages applied by each apply server since the last time the apply process was started Values of the TOTAL_MESSAGES_SPILLED and OLDEST_MESSAGE_NUMBER columns for each apply server Transaction ID of the oldest message in the queue V$STREAMS_TRANSACTION provides information about transactions that are being processed by a Streams capture process or apply process. This view can be used to identify long-running transactions and to determine how many LCRs are being processed in each transaction. This view contains information only about the captured LCRs. It does not contain information about the user-enqueued LCRs or user messages. V$BUFFERED_SUBSCRIBERS displays information about subscribers for all buffered queues in the instance. There is one row per subscriber per queue. The DBA_HIST_STREAMS_APPLY_SUM view displays information about each apply process and its activities. It contains a snapshot of information that can be found in the V$STREAMS_APPLY_COORDINATOR, V$STREAMS_APPLY_READER, and V$STREAMS_APPLY_SERVER views. The DBA_HIST_STREAMS_APPLY_SUM view is intended for use with the Automatic Workload Repository (AWR). 
You can also use the Enterprise Manager (EM) Database Control console to monitor the apply process for a database. The View Apply Statistics page displays statistics about an apply process. There are four tabs for viewing statistics related to the queue, the apply reader process, the apply coordinator process, and the apply server process. Note: The information displayed by EM comes from the GV$STREAMS_APPLY_COORDINATOR view. Oracle Database 11g: Implement Streams

609 Responding to Automated Alerts in EM
View the alert section on the Enterprise Manager home page:
Capture Aborts Alert (stateless): Critical error. Capture, dependent replication, and redo log scanning stop.
Propagation Aborts Alert (stateless): Critical error. Propagation and dependent replication stop; growing source queue.
Apply Aborts Alert (stateless): Critical error. Apply and dependent replication stop; growing destination queue; potentially other growing source queues.
Apply Error Alert (stateless): Error during apply transaction. Messages are moved to a growing error queue.
Oracle Streams Pool Alert (stateful): Automatically set by the alert infrastructure.
Responding to Automated Alerts in Enterprise Manager An alert is a warning about a potential problem or an indication that a critical threshold has been crossed. There are two types of alerts. Stateless: Indicates single messages that are not necessarily tied to the system state. For example, an alert that indicates that a capture aborted with a specific error is a stateless alert. Stateful: Is associated with a specific system state. Stateful alerts are usually based on a numeric value, with thresholds defined at warning and critical levels. For example, an alert on the current Oracle Streams pool memory usage percentage, with the warning level at 85% and the critical level at 95%, is a stateful alert. The DBA_OUTSTANDING_ALERTS view records current stateful alerts. The DBA_ALERT_HISTORY view records stateless alerts and stateful alerts that have been cleared. If you monitor your Oracle Streams environment regularly and address problems as they arise, you may not need to monitor Oracle Streams alerts. Oracle Streams alerts themselves are informational, but the errors to which they point can be critical. Best practice tip: Monitor the Streams environment regularly. Oracle Database 11g: Implement Streams
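The two alert views mentioned above can be queried directly instead of going through Enterprise Manager. The MODULE_ID filter shown here is a common way to restrict the output to Streams-related alerts; adjust it if your alerts are tagged differently.

```sql
-- Current stateful Streams alerts (for example, the Streams pool alert)
SELECT reason, suggested_action
FROM   DBA_OUTSTANDING_ALERTS
WHERE  module_id LIKE '%STREAMS%';

-- Stateless Streams alerts and stateful alerts that have been cleared
SELECT reason, creation_time
FROM   DBA_ALERT_HISTORY
WHERE  module_id LIKE '%STREAMS%';
```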

610 Oracle Database 11g: Implement Streams 18 - 610
Responding to Automated Alerts in Enterprise Manager (continued) Capture Aborts Alert (STREAMS capture process capture_name aborted with ORA-error_number) indicates a critical error. The capture process stops. Any dependent replication stops. Also, the capture process makes no further progress in scanning the redo log until it is restarted. 1. To view the error, either navigate to Enterprise Manager > Data Movement > Manage (in the Streams section) > Capture tabbed page or query the DBA_CAPTURE view in SQL*Plus. 2. Take the appropriate corrective action. 3. Restart the capture process, either by going to the Capture tabbed page in EM or by executing the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package. Propagation Aborts Alert (STREAMS propagation process source_queue, destination_queue, database_link aborted after 16 failures) indicates a critical error. The propagation process stops. Any dependent replication stops. Messages that are normally sent from one queue to another by the propagation remain in a growing source queue. Eventually, Oracle Streams performance is degraded when messages spill to disk. 1. To view the error, either navigate to Enterprise Manager > Data Movement > Manage (in the Streams section) > Propagation tabbed page or query the DBA_QUEUE_SCHEDULES view in SQL*Plus. 2. Take the appropriate corrective action. 3. Restart the propagation process, either by going to the Propagation tabbed page in EM or by executing the START_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. Apply Aborts Alert (STREAMS apply process apply_name aborted with ORA-error_number) indicates a critical error. The apply process stops. Any dependent replication stops. Messages that are normally dequeued by the apply process remain in the growing apply process queue. Other queues that send messages to the apply process queue may also grow and spill messages to disk. 1. To view the error, either navigate to Enterprise Manager > Data Movement > Manage (in the Streams section) > Apply tabbed page or query the DBA_APPLY view in SQL*Plus. 2. Take the appropriate corrective action. 3. Restart the apply process, either by going to the Apply tabbed page in EM or by executing the START_APPLY procedure in the DBMS_APPLY_ADM package. Apply Error Alert (STREAMS error queue for apply process apply_name contains new transaction with ORA-error_number) indicates an error in applying a transaction. The apply process moves all the messages in the transaction to the error queue. Dependent transactions may also result in apply errors, and the error queue may grow quickly. You must resolve the apply errors as soon as possible. Most likely, you need to correct the conditions in database objects that caused the error, and then retry or delete the apply error transactions. Additional details are provided later in this lesson about working with apply errors. Oracle Streams Pool Alert is generated when memory usage of the Oracle Streams pool exceeds the percentage specified by the STREAMS_POOL_USED_PCT metric. This alert can be raised only if the database is not using Automatic Memory Management or Automatic Shared Memory Management. Oracle Database 11g: Implement Streams
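The restart calls referenced above look like the following sketch. The process names follow the course practice environment (HR_CAP, HR_APPLY, HR_PROPAGATION); substitute your own names.

```sql
-- Restart an aborted capture process after fixing the underlying error
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'HR_CAP');
END;
/

-- Restart an aborted propagation
BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'HR_PROPAGATION');
END;
/

-- Restart an aborted apply process
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'HR_APPLY');
END;
/
```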

611 Checking the Trace Files and Alert Log
[Diagram: user changes are recorded in the redo log; a capture process (amer_CP01) captures the changes and enqueues LCRs into the source queue; messages are propagated to the destination queue; an apply process (euro_AP01) dequeues the LCRs and applies the changes to the database objects.] WRITE_ALERT_LOG='y' (default): The alert log contains the reasons why a process stopped. Set the trace_level parameter under support guidance. Check for Streams messages in database trace files: Capture processes: CP00…CPnn LogMiner reader and builder: MS00...MSnn Apply processes: AP00…APnn Apply reader and apply servers: AS00…ASnn Checking the Trace Files and Alert Log Messages about each capture, propagation, and apply process are recorded in trace files for the database in which the process or propagation job is running. A local capture process runs on a source database, a downstream capture process runs on a downstream database, a propagation job runs on the database containing the source queue in the propagation, and an apply process runs on a destination database. These trace file messages can help you identify and resolve problems in an Oracle Streams environment. All trace files for background processes are written to the Automatic Diagnostic Repository (ADR). The names of trace files are operating-system specific, but each file usually includes the name of the process writing the file. You can find the trace files in the location specified by the BACKGROUND_DUMP_DEST parameter. Keep the WRITE_ALERT_LOG parameter for the capture and apply processes set to the default value y. Then the alert log for the database contains messages about why the capture process or apply process stopped. You can control the information in the trace files by setting the trace_level parameter with the SET_PARAMETER procedure (in the DBMS_CAPTURE_ADM and DBMS_APPLY_ADM packages) for capture or apply processes. It is recommended that the trace level be set only under the guidance of Oracle Support Services. 
Oracle Database 11g: Implement Streams

612 Oracle Database 11g: Implement Streams 18 - 612
Checking the Trace Files and Alert Log (continued) Use the following checklist: Does a Capture Process Trace File Contain Messages About Capture Problems? A capture process is an Oracle background process named CPnn, where nn can include letters and numbers. For example, if the system identifier for a database running a capture process is amer and the capture process number is 01, the trace file for the capture process starts with amer_CP01. Do the Trace Files Related to Propagation Jobs Contain Messages About Problems? Each propagation uses a propagation job that depends on one or more slave processes named jnnn, where nnn is the slave process number. You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view. Does an Apply Process Trace File Contain Messages About Apply Problems? An apply process is an Oracle background process named APnn, where nn can include letters and numbers. For example, you may have a trace file for the apply process starting with euro_AP01. An apply process also uses other processes. Information about an apply process may be recorded in the trace file for one or more of these processes. The process name of the reader server and apply servers is ASnn. So you could have a trace file that contains information about a process used by an apply process starting with euro_AS01. Oracle Database 11g: Implement Streams
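The checklist above can be supported by two quick queries: one to identify the slave process for each propagation job, and one to find the trace file directory. Both views and columns are standard, though LAST_ERROR_MSG may be NULL when there is no error.

```sql
-- Find the slave process servicing each propagation job, plus any last error
SELECT qname, destination, process_name, failures, last_error_msg
FROM   DBA_QUEUE_SCHEDULES;

-- Locate the directory that holds the background process trace files
SELECT value
FROM   V$PARAMETER
WHERE  name = 'background_dump_dest';
```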

613 Handling Unavailable Destinations
[Diagram: one capture database serving multiple destinations; one destination is unavailable, so the capture queue spills to hard disk, degrading performance.] Checking the V$BUFFERED_QUEUES, DBA_PROPAGATION, and V$PROPAGATION_SENDER views Potential action: 1. SPLIT_STREAMS 2. Resolve the destination problem. 3. MERGE_STREAMS_JOB Handling Unavailable Destinations Suppose that you have a database with one capture process that captures changes for multiple destination databases. One of the destination databases becomes unavailable, so the changes for that destination cannot be propagated. These changes can build up in the capture process queue and eventually spill to hard disk. Spilling messages to hard disk at the capture database can degrade the performance of the Oracle Streams replication environment. You can query the V$BUFFERED_QUEUES view to check the number of messages in a queue and the number that have spilled to hard disk. Query the DBA_PROPAGATION and V$PROPAGATION_SENDER views for propagations and the status of each propagation. You can address the “Unavailable destination” problem with the SPLIT_STREAMS and MERGE_STREAMS_JOB procedures in the DBMS_STREAMS_ADM package. The SPLIT_STREAMS procedure splits the problem stream off from the other streams flowing from the capture process. By splitting the stream off, you can avoid performance problems while the destination is unavailable. Resolve the problem at the destination database. After the problem is resolved, the MERGE_STREAMS_JOB procedure determines whether the original capture process and the cloned capture process are within the specified merge threshold. If they are, the MERGE_STREAMS procedure is called to merge the stream back with the other streams flowing from the capture process. For details, see the lesson titled “Administering a Streams Environment.” Oracle Database 11g: Implement Streams
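The two diagnostic checks described above can be run as follows. The column lists are as documented for 11g; verify them against your release.

```sql
-- How many messages are buffered in each queue, and how many have spilled
SELECT queue_schema, queue_name, num_msgs, spill_msgs
FROM   V$BUFFERED_QUEUES;

-- Status and last error of each propagation
SELECT propagation_name, status, error_message
FROM   DBA_PROPAGATION;
```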

614 Oracle Streams Performance Advisor
Consisting of: DBMS_STREAMS_ADVISOR_ADM PL/SQL package DBA_STREAMS_TP_* data dictionary views Gathering information with a realistic workload: Taking snapshot of statistics Comparing snapshot with previous one [Timeline diagram: snapshots 1 through 4, taken 5 seconds, 10 minutes, and 15 minutes apart] Oracle Streams Performance Advisor The Oracle Streams Performance Advisor consists of the DBMS_STREAMS_ADVISOR_ADM PL/SQL package and a collection of data dictionary views, beginning with DBA_STREAMS_TP_. You can use the ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package to gather information about the Oracle Streams topology and the performance of the Oracle Streams components in the topology. Here is an example of how the Oracle Streams Performance Advisor gathers information in a session: First execution of advisor 1. Take a snapshot of the statistics. 2. Wait at least five seconds and then take another snapshot of the statistics. 3. Compare data from the first snapshot with data from the second snapshot to calculate performance statistics. Second execution of advisor 4. Again wait some time and take a snapshot of the statistics. 5. Compare data from the last snapshot in advisor run 1 with the snapshot taken in advisor run 2 to calculate performance statistics. And so on. Oracle Database 11g: Implement Streams

615 Viewing the Oracle Streams Topology
Execute the following in the same session: 1. Connect as Streams Administrator. 2. Analyze current performance (optionally, several times). 3. Identify Advisor run ID. 4. Query current performance statistics. Note: Execute the ANALYZE_CURRENT_PERFORMANCE procedure whenever you add or remove Streams components. exec DBMS_STREAMS_ADVISOR_ADM.ANALYZE_CURRENT_PERFORMANCE; SELECT DISTINCT ADVISOR_RUN_ID FROM DBA_STREAMS_TP_COMPONENT_STAT ORDER BY ADVISOR_RUN_ID; Viewing the Oracle Streams Topology The Oracle Streams topology is a representation of the databases in an Oracle Streams environment with their Streams components and the flow of messages between these components. Currently, the Oracle Streams topology gathers information about a stream path only if the stream path ends with an apply process. To gather information about the Oracle Streams topology and Oracle Streams performance: 1. In SQL*Plus, connect as Streams Administrator to the database that you want to analyze. 2. Execute the ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package. Optionally, execute this procedure several times. 3. Execute the query (as shown in the slide) to identify the advisor run ID for the information gathered in the previous step. Your output should be similar to the following: ADVISOR_RUN_ID 1 2 The Oracle Streams Performance Advisor assigns an advisor run ID to the statistics for each run. Use the last value in the output for the advisor run ID. In this example, use 2 for the advisor run ID in the queries. The Oracle Streams Performance Advisor purges some of the performance statistics that it gathered when a user session ends. Oracle Database 11g: Implement Streams

616 Oracle Database 11g: Implement Streams 18 - 616
Viewing the Oracle Streams Topology (continued) Therefore, you should execute the performance statistics queries in the same session as the ANALYZE_CURRENT_PERFORMANCE procedure. Complete these steps whenever you want to monitor the current performance of your Oracle Streams environment. Note: You should also run the ANALYZE_CURRENT_PERFORMANCE procedure when new Oracle Streams components are added to any database in the Oracle Streams environment. Executing the procedure updates the Oracle Streams topology with information about new components. Oracle Database 11g: Implement Streams

617 Viewing the Oracle Streams Topology
Viewing the databases in the Oracle Streams environment: SELECT GLOBAL_NAME, LAST_QUERIED, VERSION, COMPATIBILITY, MANAGEMENT_PACK_ACCESS FROM DBA_STREAMS_TP_DATABASE; Viewing the Oracle Streams components at each database: SELECT COMPONENT_ID, COMPONENT_NAME, COMPONENT_TYPE, COMPONENT_DB FROM DBA_STREAMS_TP_COMPONENT ORDER BY COMPONENT_ID; Viewing each Stream path in an Oracle Streams topology: SELECT PATH_ID,SOURCE_COMPONENT_ID,SOURCE_COMPONENT_NAME, DESTINATION_COMPONENT_ID, DESTINATION_COMPONENT_NAME, POSITION, ACTIVE FROM DBA_STREAMS_TP_COMPONENT_LINK ORDER BY PATH_ID, POSITION; Viewing the Oracle Streams Topology (continued) Viewing the databases in the Oracle Streams environment: COLUMN GLOBAL_NAME HEADING 'Global Name' FORMAT A12 COLUMN LAST_QUERIED HEADING 'L_Queried' COLUMN VERSION HEADING 'Version' FORMAT A10 COLUMN COMPATIBILITY HEADING 'Compatibility' FORMAT A13 COLUMN MANAGEMENT_PACK_ACCESS HEADING 'Management Pack' FORMAT A18 SELECT GLOBAL_NAME, LAST_QUERIED, VERSION, COMPATIBILITY, MANAGEMENT_PACK_ACCESS FROM DBA_STREAMS_TP_DATABASE; Global Name L_Queried Version Compatibility Management Pack HUB.NET JUN DIAGNOSTIC+TUNING SPOKE1.NET 15-JUN DIAGNOSTIC+TUNING SPOKE2.NET 15-JUN DIAGNOSTIC+TUNING Oracle Database 11g: Implement Streams

618 Oracle Database 11g: Implement Streams 18 - 618
Viewing the Oracle Streams Topology (continued) Viewing the Oracle Streams components at each database COLUMN COMPONENT_ID HEADING 'ID' FORMAT 99 COLUMN COMPONENT_NAME HEADING 'Name' FORMAT A36 COLUMN COMPONENT_TYPE HEADING 'Type' FORMAT A22 COLUMN COMPONENT_DB HEADING 'DB' FORMAT A7 SELECT COMPONENT_ID, COMPONENT_NAME, COMPONENT_TYPE, COMPONENT_DB FROM DBA_STREAMS_TP_COMPONENT ORDER BY COMPONENT_ID; ID Name Type DB 1 "STRMADMIN"."DESTINATION_SPOKE1" QUEUE HUB.NET 2 "STRMADMIN"."DESTINATION_SPOKE2" QUEUE HUB.NET 3 "STRMADMIN"."SOURCE_HNS" QUEUE HUB.NET 4 "STRMADMIN"."SOURCE_HNS"=>SPOKE1.NET PROPAGATION SENDER HUB.NET . . . 22 HUB.NET=>"STRMADMIN"."DESTINATION_SP PROPAGATION RECEIVER SPOKE2. OKE2" NET Viewing each Stream path in an Oracle Streams topology COLUMN PATH_ID HEADING 'Path|ID' FORMAT 9999 COLUMN SOURCE_COMPONENT_ID HEADING 'Source|ID' FORMAT 9999 COLUMN SOURCE_COMPONENT_NAME HEADING 'S_Comp|Name' FORMAT A20 COLUMN DESTINATION_COMPONENT_ID HEADING 'Dest|ID' FORMAT 9999 COLUMN DESTINATION_COMPONENT_NAME HEADING 'D_Comp|Name' FORMAT A15 COLUMN POSITION HEADING 'Pos' FORMAT 999 COLUMN ACTIVE HEADING 'Act?' FORMAT A7 SELECT PATH_ID, SOURCE_COMPONENT_ID, SOURCE_COMPONENT_NAME, DESTINATION_COMPONENT_ID, DESTINATION_COMPONENT_NAME, POSITION, ACTIVE FROM DBA_STREAMS_TP_COMPONENT_LINK ORDER BY PATH_ID, POSITION; Path Source S_Comp Dest D_Comp ID ID Name ID Name Pos Act? CAPTURE_HNS "STRMADMIN"."SO YES URCE_HNS" "STRMADMIN"."SOURCE_ "STRMADMIN"."SO YES HNS" URCE_HNS"=>SPOKE1.NET "STRMADMIN"."SOURCE_ HUB.NET=>"STRMA YES HNS"=>SPOKE1.NET DMIN"."DESTINATION_SPOKE1" HUB.NET=>"STRMADMIN" "STRMADMIN"."DE YES ."DESTINATION_SPOKE STINATION_SPOKE" 1" "STRMADMIN"."DESTINA APPLY_SPOKE YES TION_SPOKE1" CAPTURE_HNS "STRMADMIN"."SO YES Oracle Database 11g: Implement Streams

619 Viewing the Oracle Streams Topology
Viewing the Oracle Streams Topology (continued) The screenshot in the slide shows an example of a Streams topology, which is part of the Streams Healthcheck report. Oracle Database 11g: Implement Streams

620 Viewing Performance Statistics for Oracle Streams Components
Checking for bottleneck components in the Oracle Streams topology SELECT PATH_ID, COMPONENT_ID, COMPONENT_NAME, COMPONENT_TYPE, COMPONENT_DB FROM DBA_STREAMS_TP_PATH_BOTTLENECK WHERE BOTTLENECK_IDENTIFIED='YES' AND ADVISOR_RUN_ID=2 ORDER BY PATH_ID, COMPONENT_ID; Path Comp ID ID Name Type Database CAPTURE_HNS CAPTURE HUB.NET CAPTURE_HNS CAPTURE HUB.NET APPLY_SPOKE1 APPLY HUB.NET APPLY_SPOKE2 APPLY HUB.NET Viewing Performance Statistics for Oracle Streams Components The performance of Oracle Streams components depends on several factors, including the computer equipment used in the environment and the speed of the network. A bottleneck component is one that might be performing poorly or one that is disabled. The following is the formatting for the query in the slide: COLUMN PATH_ID HEADING 'Path|ID' FORMAT 999 COLUMN COMPONENT_ID HEADING 'Comp|ID' FORMAT 999 COLUMN COMPONENT_NAME HEADING 'Name' FORMAT A12 COLUMN COMPONENT_TYPE HEADING 'Type' FORMAT A10 COLUMN COMPONENT_DB HEADING 'Database' FORMAT A15 If this query returns no results, the Oracle Streams Performance Advisor did not identify bottleneck components in your environment. However, if this query returns one or more bottleneck components, check their status. If they are disabled, try to enable them. For examples of other performance statistics, see Oracle Streams Concepts and Administration. Oracle Database 11g: Implement Streams

621 Using the UTL_SPADV Package
To collect and analyze statistics: 1. Connect as the Streams Administrator. 2. Execute the COLLECT_STATS procedure: 3. Execute the SHOW_STATS procedure: exec UTL_SPADV.COLLECT_STATS; SET SERVEROUTPUT ON SIZE 50000 exec UTL_SPADV.SHOW_STATS; Using the UTL_SPADV Package The UTL_SPADV package provides subprograms to collect and analyze statistics for the Oracle Streams components in a distributed database environment. The package uses the Oracle Streams Performance Advisor to gather statistics and includes the following procedures: COLLECT_STATS uses the Oracle Streams Performance Advisor to gather statistics about the Oracle Streams components and subcomponents in a distributed database environment. SHOW_STATS generates output that includes the statistics gathered by the COLLECT_STATS procedure. Oracle Database 11g: Implement Streams

622 Reading the UTL_SPADV.SHOW_STATS Output
Abbreviations used in the SHOW_STATS output:
A: Apply process
ANR: Apply network receiver used by an apply process in a combined capture and apply configuration
APC: Coordinator process used by an apply process
APR: Reader server used by an apply process
APS: Apply server used by an apply process
B: Bottleneck
C or CAP: Capture process
flwctrl: Flow control
idl: Idle
LMB: Builder server used by a capture process (LogMiner builder)
LMP: Preparer server used by a capture process (LogMiner preparer)
LMR: Reader server used by a capture process (LogMiner reader)
msgs: Messages
preceiver or PR: Propagation receiver
psender or PS: Propagation sender
Q: Queue
serial#: Session serial number
sec: Second
sid: Session identifier
sub_name: Subcomponent name
topev: Top event
Reading the UTL_SPADV.SHOW_STATS Output Use the abbreviations in the slide to read the output of the UTL_SPADV.SHOW_STATS procedure. Your output should be similar to the following: LEGEND <statistics>= <capture> [ <queue> <psender> <preceiver> <queue> ] <apply> <bottleneck> <capture> = '|<C>' <name> <msgs captured/sec> <msgs enqueued/sec> <latency> <bytes to apply/sec> 'LMR' <idl%> <flwctrl%> <topevt%> <topevt> 'LMP' (<parallelism>) <idl%> <flwctrl%> <topevt%> <topevt> 'LMB' <idl%> <flwctrl%> <topevt%> <topevt> 'CAP' <idl%> <flwctrl%> <topevt%> <topevt> <apply> = '|<A>' <name> <msgs applied/sec> <txns applied/sec> <latency> 'ANR' <idl%> <flwctrl%> <topevt%> <topevt> 'APR' <idl%> <flwctrl%> <topevt%> <topevt> 'APC' <idl%> <flwctrl%> <topevt%> <topevt> 'APS' (<parallelism>) <idl%> <flwctrl%> <topevt%> <topevt> <queue> = '|<Q>' <name> <msgs enqueued/sec> <msgs spilled/sec> <msgs in queue> Oracle Database 11g: Implement Streams

623 Oracle Database 11g: Implement Streams 18 - 623
Reading the UTL_SPADV.SHOW_STATS Output (continued) <psender> = '|<PS>' <name> <msgs sent/sec> <bytes sent/sec> <latency> <idl%> <flwctrl%> <topevt%> <topevt> <preceiver> = '|<PR>' <name> <idl%> <flwctrl%> <topevt%> <topevt> <bottleneck>= '|<B>' <name> <sub_name> <sid> <serial#> <topevt%> <topevt> OUTPUT PATH 3 RUN_ID 1 RUN_TIME 2007-JUN-13 12:02:12 |<C> CAPTURE_HNS E+06 LMR 95% 0% 3.3% "" LMP (1) 86.7% 0% 11.7% "" LMB 86.7% 0% 11.7% "" CAP 16.7% 71.7% 11.7% "" |<A> APPLY_SPOKE ANR 0% 80% 15% "" APR 93.3% 0% 6.7% "" APC 96.7% 0% 3.3% "" APS (2) 8.3% 0% 76.7% "CPU + Wait for CPU" |<B> APPLY_SPOKE1 APS % "CPU + Wait for CPU" PATH 3 RUN_ID 2 RUN_TIME 2007-JUN-13 12:03:12 |<C> CAPTURE_HNS E+06 LMR 95% 0% 1.7% "" LMP (1) 83.3% 0% 16.7% "" LMB 85% 0% 15% "" CAP 10% 81.7% 8.3% "" |<A> APPLY_SPOKE ANR 0% 83.3% 13.3% "" APR 88.3% 0% 11.7% "" APC 90% 0% 10% "" APS (2) 11.7% 0% 75% "CPU + Wait for CPU" |<B> APPLY_SPOKE1 APS % "CPU + Wait for CPU" . Note: This output is for illustrative purposes only and does not necessarily reflect realistic performance benchmarks. Actual performance characteristics vary depending on individual client configurations and conditions. Oracle Database 11g: Implement Streams

624 Performing a Streams Healthcheck
Healthcheck report: Provides current status of your Streams environment Confirms that prerequisites for Streams are met Identifies the database objects of interest for Streams Does not modify the Streams configuration Report output structure: Generic information Capture-process information Propagation information Apply-process information Rule analysis Object information Streams-related information Performing a Streams Healthcheck Before you can execute a Streams Healthcheck, download the scripts appropriate for your database release from Oracle Support MetaLink (Note ). The following examples are based on Oracle Database 11g (Release 1). Oracle Database 11g: Implement Streams

625 Interpreting a Streams Healthcheck
Use the online information from Oracle Support MetaLink (Note ) if you need assistance interpreting the Streams Healthcheck. Oracle Database 11g: Implement Streams

626 Oracle Database 11g: Implement Streams 18 - 626
Summary
In this lesson, you should have learned how to:
Describe and use the tools that are available for monitoring Streams
Monitor LCR messages
Monitor statistics for capture, propagation, and apply
Respond to Oracle Streams alerts
Check the trace files and alert log for problems
Use Streams monitoring tools, including:
Oracle Streams Performance Advisor
UTL_SPADV package
Healthcheck
Oracle Database 11g: Implement Streams

627 Practice 20 Overview: Monitoring Streams
This practice covers performing a Streams Healthcheck.
Diagram: the EURO and AMER databases replicating the HR schema, with queues HR_CAP_Q and HR_APPLY_Q, capture processes HR_CAP and HR_CAP_2, apply processes HR_APPLY and HR_APPLY_2, and propagations HR_PROPAGATION and HR_PROPAGATION_2.
Oracle Database 11g: Implement Streams

628 Troubleshooting Oracle Streams

629 Oracle Database 11g: Implement Streams 18 - 629
Objectives After completing this lesson, you should be able to: Describe how to troubleshoot Resolve typical problems that occur during: Capture Propagation Apply Objectives See Appendix D titled “Common Streams Error Messages” for information about troubleshooting. Oracle Database 11g: Implement Streams

630 Oracle Database 11g: Implement Streams 18 - 630
How to Troubleshoot
Analyze issues.
Check the trace files and alert log for problems.
Perform a Streams Healthcheck.
Troubleshoot specific Streams processes.
Take corrective action.
Respond to Oracle Streams alerts.
Analyze and recover from configuration errors.
Analyze and handle performance problems due to an unavailable destination.
Confirm that the issue is resolved.
Perform a Streams Healthcheck again.
Check the trace files and alert log again.
How to Troubleshoot
Troubleshooting is about identifying and resolving problems. The workflow is always similar: analyze, correct, confirm. Several of the tools that you use are discussed in the lesson titled "Monitoring Oracle Streams."
A good starting point is alerts: Oracle Streams generates alerts, which are warnings about potential problems. You begin troubleshooting by finding more information about the source of a problem. You can either use Enterprise Manager > Data Movement > Manage (in the Streams section) > Streams process page (such as the Capture tabbed page) or query data dictionary views in SQL*Plus. The relevant views depend on the problem at hand; they include the DBA_CAPTURE, DBA_QUEUE_SCHEDULES, and DBA_APPLY views.
When you understand the problem, take the appropriate corrective action. This may include restarting a Streams process if it has stopped. If you encounter errors during the apply of a transaction, correct the conditions in the database objects that caused the error, and then retry or delete the apply error transactions (which have been moved to an error queue).
Finally, always confirm that your action really resolved the issue.
Oracle Database 11g: Implement Streams

631 Troubleshooting Capture
Checklist: Is combined capture and apply (CCA) used? What is the state of the capture process? Is the database configuration correct? Are the rules configured correctly? Is transformation being used? Are there any error messages? Troubleshooting Capture The most common problems with capture are due to improper configuration. Either the database is not configured to support capture, or the rules are not defined or defined incorrectly. The next few slides provide some insight into what can go wrong during capture, how to identify the problem, and how to fix it. Oracle Database 11g: Implement Streams

632 Determining Whether CCA Optimization Is Used
SELECT CAPTURE_NAME, APPLY_NAME, APPLY_DBLINK, APPLY_MESSAGES_SENT, APPLY_BYTES_SENT FROM GV$STREAMS_CAPTURE; Capture Apply Database Number of Number of Name Name Link Msg Sent Bytes Sent CAPTURE_HNS APPLY_SPOKE1 SPOKE.NET Determining Whether CCA Optimization Is Used Combined capture and apply (CCA) is an automatic optimization that Streams uses when a capture process sends captured LCRs directly to a single apply process, which applies the changes. When a capture process uses CCA, the following columns in the GV$STREAMS_CAPTURE data dictionary view are populated: APPLY_NAME: The apply process to which the capture process sends captured LCRs. This column is populated only if CCA optimization is used. APPLY_DBLINK: The database link to the remote database if the apply process is at a remote database APPLY_MESSAGES_SENT: The number of messages sent by the capture process to the apply process since the capture process last started APPLY_BYTES_SENT: The number of bytes sent by the capture process to the apply process since the capture process last started If a capture process does not use CCA, the APPLY_NAME and APPLY_DBLINK columns are not populated and the APPLY_MESSAGES_SENT and APPLY_BYTES_SENT columns are 0 (zero). Oracle Database 11g: Implement Streams

633 Capture Process Status
Check the state of the capture process: Aborted or disabled? Check the appropriate trace file for messages. Enabled? Is the capture process capturing current changes? When was the last change made available for capture? What is the capture process doing? SELECT CAPTURE_NAME, ((SYSDATE - CAPTURE_MESSAGE_CREATE_TIME)*86400) "Redo scanning latency", CAPTURE_MESSAGE_CREATE_TIME FROM GV$STREAMS_CAPTURE; Capture Process Status A capture process captures changes only when it is enabled. You can check whether a capture process is enabled, disabled, or aborted by querying the DBA_CAPTURE data dictionary view. If the capture process is disabled, try restarting it. If the capture process is aborted, you may need to correct an error before you can restart it successfully. To determine why the capture process aborted, query the DBA_CAPTURE data dictionary view or check the trace file for the capture process. SELECT capture_name, status_change_time, error_number, error_message FROM DBA_CAPTURE WHERE status='ABORTED'; If a capture process is enabled but has not captured recent changes, the cause may be that the capture process has fallen behind. To check, you can query the GV$STREAMS_CAPTURE dynamic performance view to determine the redo log scanning latency, which specifies the number of seconds between the creation time of the most recent redo log event scanned by a capture process and the current time. This number may be relatively large immediately after you start a capture process. You can use the query displayed in the slide to determine the redo log scanning latency. Oracle Database 11g: Implement Streams
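If the status check shows that the capture process is disabled or aborted, it can be restarted with the DBMS_CAPTURE_ADM package once the underlying error has been corrected. A minimal sketch; the capture process name STRM01_CAPTURE is a placeholder, not a name from this course environment:

```sql
-- Check why the capture process stopped (relevant when STATUS is ABORTED)
SELECT capture_name, status, error_number, error_message
FROM   DBA_CAPTURE;

-- Restart the capture process after correcting the underlying problem.
-- 'STRM01_CAPTURE' is a hypothetical name; substitute your own.
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRM01_CAPTURE');
END;
/
```

If the process aborts again immediately, recheck DBA_CAPTURE and the capture process trace file before retrying.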

634 Oracle Database 11g: Implement Streams 18 - 634
Capture Process Status (continued)
You can also check the latency of a capture process by determining the length of time that has passed since the last change was made available to the capture process. For a local capture process, the last archived redo entry available is the last entry from the online redo log flushed to an archived log file. For a downstream capture process, the last archived redo entry available is the redo entry with the most recent SCN in the last archived log file added to the LogMiner session used by the capture process. You can display information about the last redo entry that was made available to each capture process by running the following query:
SELECT v.capture_name, c.capture_type, v.state,
       v.available_message_number,
       TO_CHAR(v.available_message_create_time,
               'HH24:MI:SS MM/DD/YY') AVAILABLE_MESSAGE_CREATE_TIME
FROM   GV$STREAMS_CAPTURE v, DBA_CAPTURE c
WHERE  v.capture_name = c.capture_name;
CAPTURE_NAME CAPTURE_TYPE STATE AVAILABLE_MESSAGE_NUMBER AVAILABLE_MESSAGE_CREATE_TIME
CAPTURE_DOWNSTREAM_SITE DOWNSTREAM WAITING FOR REDO :33:20 10/16/07
If the latency is high for a local capture process, you may be able to improve performance by adjusting the setting of the PARALLELISM capture process parameter.
The STATE column of GV$STREAMS_CAPTURE can also provide information about what the capture process is currently doing:
INITIALIZING
CAPTURING CHANGES
EVALUATING RULE
ENQUEUING MESSAGE
SHUTTING DOWN
ABORTING
CREATING LCR
WAITING FOR DICTIONARY REDO
WAITING FOR REDO
PAUSED FOR FLOW CONTROL
DICTIONARY INITIALIZATION
WAITING FOR APPLY TO START
CONNECTING TO APPLY DATABASE
WAITING FOR PROPAGATION TO START
Note:
The GV$STREAMS_CAPTURE view can be queried only for enabled capture processes.
The GV$STREAMS_TRANSACTION view contains only current transactions. When the COMMIT or ROLLBACK LCR is received, the information about capture transactions disappears from this view.
Oracle Database 11g: Implement Streams

635 Determining Message Enqueuing Latency
ALTER SESSION SET nls_date_format='HH24:MI:SS MM/DD/YY';
COLUMN LAST_POST HEADING 'Secs since|last post'
SELECT capture_name,
       total_messages_captured SCANNED,
       total_messages_enqueued ENQUEUED,
       (SYSDATE - capture_time) * 86400 LAST_POST,
       (enqueue_time - enqueue_message_create_time)*86400 "Latency (secs)",
       enqueue_message_create_time "Last Queued Msg Time",
       enqueue_time
FROM GV$STREAMS_CAPTURE;
Determining Message Enqueuing Latency
You can find the following information about each capture process by running the query shown in the slide:
The number of redo entries scanned by the capture process
The number of messages selected for queuing (as a result of the rules for the capture process)
The number of seconds since the capture process last recorded its status
The message enqueuing latency, which specifies the number of seconds between when a message was recorded in the redo log and when the message was enqueued by the capture process
The message creation time, which is the time that the data manipulation language (DML) or data definition language (DDL) change generated the redo information for the most recently enqueued message
The enqueue time, which is when the capture process enqueued the message into its queue
The ALTER SESSION statement sets the format of the date columns to a 24-hour time followed by the date.
Oracle Database 11g: Implement Streams

636 Oracle Database 11g: Implement Streams 18 - 636
Determining Message Enqueuing Latency (continued) The query shown in the slide on the previous page returns an output that is similar to the following: Secs since CAPTURE_NAME SCANNED ENQUEUED last post Latency (secs) Last Queued Msg T ENQUEUE_TIME CAPTURE 11:21:26 08/30/07 11:21:31 08/30/07 Oracle Database 11g: Implement Streams

637 Confirming Capture Database Configuration
Is COMPATIBLE set to the appropriate version?
Is the database in ARCHIVELOG mode?
Has the archive process started?
Is the Streams pool sized large enough?
Confirming Capture Database Configuration
The COMPATIBLE parameter must be set at a minimum to (or higher), or you will get the ORA error. To use downstream capture, this parameter must be set to (or higher) at both the source database and the downstream database. A downstream capture database with COMPATIBLE set to 10.2 can capture changes for a database that has a COMPATIBLE of . To use the new Streams features introduced in Oracle Database 11g, this parameter must be set to (or higher).
To create a capture process, the database must be in ARCHIVELOG mode. An archive process must also be started; otherwise, the redo logs are not archived automatically, and the database cannot perform any transactions until new redo log space is made available or the existing redo logs are manually archived.
Streams Memory Management
The Automatic Shared Memory Management feature manages the size of the Streams pool when the SGA_TARGET initialization parameter is set to a nonzero value. If the STREAMS_POOL_SIZE initialization parameter is also set to a nonzero value, Automatic Shared Memory Management uses this value as a minimum for the Streams pool. You can set a minimum size if your environment needs a minimum amount of memory in the Streams pool to function properly.
Oracle Database 11g: Implement Streams

638 Oracle Database 11g: Implement Streams 18 - 638
Confirming Capture Database Configuration (continued) Streams Memory Management (continued) If the STREAMS_POOL_SIZE initialization parameter is set to a non-zero value, and the SGA_TARGET parameter is set to 0 (zero), the Streams pool size is the value specified by the STREAMS_POOL_SIZE parameter. If you plan to set the Streams pool size manually, you can use the V$STREAMS_POOL_ADVICE dynamic performance view to determine an appropriate setting for the STREAMS_POOL_SIZE initialization parameter. If both the STREAMS_POOL_SIZE and the SGA_TARGET initialization parameters are set to 0 (zero) and Automatic Memory Management is disabled, by default, the first use of Streams in a database transfers an amount of memory equal to 10% of the shared pool from the buffer cache to the Streams pool. The first use of Streams in a database is the first attempt to allocate memory from the Streams pool. Memory is allocated from the Streams pool in the following ways: A message is enqueued into a buffered queue. The message can be an LCR captured by a capture process, or it can be a user-enqueued LCR or message. A capture process is started. An apply process is started. Oracle Database 11g: Implement Streams
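As a starting point for sizing STREAMS_POOL_SIZE manually, the V$STREAMS_POOL_ADVICE view mentioned above can be queried directly. A sketch; the column names follow the Oracle Database 11g documentation for this view:

```sql
-- Estimated spill activity for a range of candidate Streams pool sizes;
-- look for the smallest size at which ESTD_SPILL_COUNT levels off.
SELECT streams_pool_size_for_estimate,
       streams_pool_size_factor,
       estd_spill_count,
       estd_spill_time
FROM   V$STREAMS_POOL_ADVICE
ORDER BY streams_pool_size_factor;
```

A size factor of 1 corresponds to the current Streams pool size, so rows with a smaller factor show the estimated effect of shrinking the pool and rows with a larger factor the effect of growing it.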

639 Confirming Supplemental Logging
Source database–level logging for primary and unique keys:
SELECT table_name, scn, timestamp,
       supplemental_log_data_pk, supplemental_log_data_ui,
       supplemental_log_data_fk, supplemental_log_data_all
FROM DBA_CAPTURE_PREPARED_TABLES;
Table-level logging for primary and unique keys:
SELECT owner, table_name, log_group_type FROM DBA_LOG_GROUPS;
Check the columns in supplemental log groups:
SELECT table_name, column_name, log_group_name, position
FROM DBA_LOG_GROUP_COLUMNS;
Confirming Supplemental Logging
Lack of supplemental logging on required columns is the primary cause of apply errors. You can query a number of views to determine the current settings for supplemental logging, as described in the following list:
V$DATABASE view: the SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL, SUPPLEMENTAL_LOG_DATA_UI, and SUPPLEMENTAL_LOG_DATA_MIN columns
DBA_LOG_GROUPS, ALL_LOG_GROUPS, and USER_LOG_GROUPS views: the ALWAYS, GENERATED, and LOG_GROUP_TYPE columns
DBA_LOG_GROUP_COLUMNS, ALL_LOG_GROUP_COLUMNS, and USER_LOG_GROUP_COLUMNS views: the LOGGING_PROPERTY column
Oracle Database 11g: Implement Streams
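If these queries show that supplemental logging is missing, it can be enabled at the database level or for individual tables. A sketch; the table hr.employees, its columns, and the log group name are illustrative examples, not requirements of this environment:

```sql
-- Database-level supplemental logging for primary and unique keys
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Table-level unconditional supplemental log group for specific columns
-- (table, column, and group names are examples only)
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp (employee_id, last_name) ALWAYS;
```

Database-level logging covers every table, at the cost of extra redo; table-level log groups let you restrict the overhead to the replicated tables and columns.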

640 Checking the Capture Process Rules
Check DBA_STREAMS_RULES or a similar view. Query for negative rule sets. Check for empty rule sets. Determine whether the system-created rule condition has been modified with DBMS_RULE_ADM. SELECT streams_name, rule_owner, rule_name, rule_condition, rule_set_type "TYPE", streams_rule_type "LEVEL", schema_name, object_name, subsetting_operation, dml_condition, same_rule_condition "Orig?" FROM DBA_STREAMS_RULES WHERE streams_type = 'CAPTURE'; Checking the Capture Process Rules If a capture process is behaving in an unexpected way, the problem may be that the rules in either the positive or negative rule set for the capture process are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, the cause may be that the rules in the rule sets used by the capture process do not instruct the capture process to capture changes to the table. Also, if a rule is defined so that unsupported objects or data types are captured, this can cause the capture process to abort. You can check the rules for capture by querying the DBA_STREAMS_RULES or DBA_STREAMS_*_RULES views. If you use both positive and negative rule sets in your Streams environment, you need to know whether a rule returned by this view is in the positive or negative rule set for a particular Streams client. In general, a message satisfies the rule sets for a Streams client if no rules in the negative rule set evaluate to TRUE for the message, and at least one rule in the positive rule set evaluates to TRUE for the message. It is possible to modify the rule condition of a Streams rule. These modifications may change the behavior of the Streams clients using the Streams rule. If the value of the SAME_RULE_CONDITION column in the DBA_STREAMS_RULE view is NO, the rule has been modified. Query the RULE_CONDITION column to see the current condition for the rule. Oracle Database 11g: Implement Streams

641 Checking Transformations
Check whether there is a transformation specified for a rule. A rule-based transformation can modify a message when a rule in a positive rule set evaluates to TRUE for that message. SELECT rule_owner, rule_name, transform_type FROM DBA_STREAMS_TRANSFORMATIONS; Checking Transformations In Streams, a rule-based transformation is specified in a rule action context that has the name STREAMS$_TRANSFORM_FUNCTION in the name-value pair. The value in the name-value pair is the name of the PL/SQL procedure that performs the transformation. You can query the DBA_STREAMS_TRANSFORMATIONS view for information about declarative transformations. View the TRANSFORM_TYPE column to determine whether the transformation is DECLARATIVE or CUSTOM. If a transformation is configured on capture, this can affect the data that is placed into the stream and sent to all destination sites. Oracle Database 11g: Implement Streams

642 Monitoring Large Transactions
Use GV$STREAMS_TRANSACTION to monitor transactions processed by apply or capture processes:
SELECT streams_name, streams_type, cumulative_message_count,
       first_message_time, xidusn, xidslt, xidsqn,
       last_message_time, total_message_count
FROM gv$streams_transaction;
Use the alert log at the capture database to check for long-running or large transactions.
Monitoring Large Transactions
GV$STREAMS_TRANSACTION provides information about transactions that are being processed by a Streams capture process or apply process. This view can be used to identify long-running or large transactions and to determine how many logical change records (LCRs) are being processed in each transaction. You can also use the alert log at the capture database to check for long-running or large transactions.
This view contains information about captured LCRs only. It does not contain information about user-enqueued LCRs or user messages. This view shows only current information about LCRs that are being processed because they satisfied the rule sets for the Streams process at the time of the query.
For a capture process, this view shows information only about changes in transactions that the capture process has converted to LCRs. It does not show information about all the active transactions present in the redo log.
For apply processes, this view shows information only about LCRs that the apply process has dequeued. It does not show information about LCRs in the apply process's queue.
Oracle Database 11g: Implement Streams

643 Oracle Database 11g: Implement Streams 18 - 643
Monitoring Large Transactions (continued)
Information about a transaction remains in the view until the transaction commits or until the entire transaction is rolled back. In the query shown in the slide, you are retrieving the following information:
Streams process name
Streams process type, which is either CAPTURE or APPLY
XIDUSN: The transaction ID undo segment number of the transaction
XIDSLT: The transaction ID slot number of the transaction
XIDSQN: The transaction ID sequence number of the transaction
CUMULATIVE_MESSAGE_COUNT: The number of LCRs processed in the transaction. If the Streams capture process or apply process is restarted while the transaction is being processed, this field shows the number of LCRs processed in the transaction since the Streams process was started.
TOTAL_MESSAGE_COUNT: The total number of LCRs processed in the transaction by an apply process. This field does not pertain to capture processes.
FIRST_MESSAGE_TIME: The time stamp of the first LCR processed in the transaction. If a capture process is restarted while the transaction is being processed, this field shows the time stamp of the first LCR processed after the capture process was started.
LAST_MESSAGE_TIME: The time stamp of the last LCR processed in the transaction
Oracle Database 11g: Implement Streams

644 Troubleshooting Propagation
Checklist:
Has the propagation been specified between the correct sites and queues?
Is the queue_to_queue propagation parameter set to TRUE or FALSE in the DBA_PROPAGATION view?
Does the database link exist? Is it working?
Is the propagation enabled and scheduled properly?
Are there any trace files or alert log messages?
Do rules exist for the propagation?
Are there any transformations specified for the propagation rules?
Is the propagation getting error messages?
Troubleshooting Propagation
If a propagation is not propagating changes as expected, use the questions in the slide to identify and resolve propagation problems, and check the alert log.
Oracle Database 11g: Implement Streams

645 Checking Propagation Configuration
Check the propagation name, source, and destination queue names. SELECT propagation_name, source_queue_owner||'.'|| source_queue_name SRC, destination_queue_owner ||'.'|| destination_queue_name DEST, destination_dblink DBLINK, QUEUE_TO_QUEUE FROM DBA_PROPAGATION; Checking Propagation Configuration If messages are not appearing in the destination queue of a propagation as expected, the propagation may not be configured to propagate messages from the correct source queue to the correct destination queue. For example, to check the source queue and destination queue for all propagation jobs, you can run the query in the slide or the following query: COLUMN SOURCE_QUEUE HEADING 'Source Queue' FORMAT A40 COLUMN DESTINATION_QUEUE HEADING 'Destination Queue' FORMAT A30 SELECT p.source_queue_owner||'.'|| g.global_name SOURCE_QUEUE, p.destination_dblink DESTINATION_QUEUE FROM DBA_PROPAGATION p, GLOBAL_NAME g; The output from this query is similar to the following: Source Queue Destination Queue To verify that the database link is configured correctly, connect as the propagation schedule owner and issue a query that uses the database link against a table at the destination site. Oracle Database 11g: Implement Streams
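To carry out the database link test suggested in the notes, connect as the propagation schedule owner and run any query through the link. A sketch; SITE2.NET is an example link name:

```sql
-- Verify that the link resolves and that the remote database answers.
-- SITE2.NET is a placeholder; use the DESTINATION_DBLINK value
-- returned by the DBA_PROPAGATION query above.
SELECT global_name FROM global_name@SITE2.NET;
```

If this query fails with a TNS or authentication error, fix the link (or the underlying network alias) before looking any further at the propagation itself.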

646 Checking the Propagation Schedule
Verify that the propagation is enabled and associated with a job queue process. Determine whether there are any failures or errors received. Query the date and time when the propagation schedule will be started. Determine the number of messages sent or received and the number of messages that have been acknowledged. Checking the Propagation Schedule The following query shows the statistics for all scheduled propagations: SELECT propagation_name, destination, message_delivery_mode, schedule_disabled, process_name, last_run_date, next_run_date, total_number, failures, last_error_time, last_error_msg FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p WHERE s.destination = p.destination_dblink; The SCHEDULE_DISABLED column (which appears as simply S) shows whether the schedule is disabled, with Y meaning that it is disabled. PROPAGATION_NAME is the name of the scheduled propagation. Oracle Database 11g: Implement Streams
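When the schedule turns out to be disabled, it can be restarted with the DBMS_PROPAGATION_ADM package after the underlying error is corrected. A minimal sketch; the propagation name is a placeholder:

```sql
-- Restart a disabled propagation after fixing the underlying error
-- (for example, a broken database link). The name is an example only.
BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'HR_PROPAGATION');
END;
/
```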

647 Oracle Database 11g: Implement Streams 18 - 647
Checking the Propagation Schedule (continued) PROCESS_NAME is the name of the process that most recently executed this schedule. If errors have occurred, check the trace files that have this process name as part of the file name, such as orcl_j000_30244.trc, for any additional error information. LAST_RUN_DATE is the date of the last successful schedule execution. NEXT_RUN_DATE is the date and time of the next scheduled execution of this propagation schedule. If NEXT_RUN_DATE is NULL, the job is currently running or there is no scheduled interval for this propagation. TOTAL_NUMBER is the total number of messages that have been propagated. FAILURES is the number of times execution has failed. After each failure, the next run waits 30 seconds longer than the previous wait before reexecuting. The wait after the first failure is 30 seconds, 60 seconds after the second failure, 90 seconds after the third failure, and so on. After 16 successive failures, the schedule is disabled. LAST_ERROR_TIME is the time of the last unsuccessful execution, and LAST_ERROR_MESSAGE is the number and message text of the encountered error. For determining the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site, query the V$PROPAGATION_SENDER view at the origination site and the V$PROPAGATION_RECEIVER view at the destination site. SELECT queue_schema, queue_name, dblink, schedule_status, high_water_mark, acknowledgement FROM V$PROPAGATION_SENDER; QUEUE_SCHEMA QUEUE_NAME DBLINK SCHEDULE_STATUS HIGH_WATER_MARK ACKNOWLEDGEMENT STRMADMIN STREAMS_QUEUE SITE2.NET SCHEDULE ENABLED SELECT src_queue_name, src_dbname, high_water_mark, acknowledgement FROM V$PROPAGATION_RECEIVER; SRC_QUEUE_NAME SRC_DBNAME HIGH_WATER_MARK ACKNOWLEDGEMENT STREAMS_QUEUE SITE1.NET Oracle Database 11g: Implement Streams
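The retry backoff described above (each wait 30 seconds longer than the previous one, with the schedule disabled after 16 successive failures) can be sketched as a small calculation. This is only an illustration of the arithmetic, not Oracle code:

```python
def propagation_retry_waits(max_failures: int = 16) -> list[int]:
    """Wait (in seconds) before retry n: 30s after the first failure,
    60s after the second, 90s after the third, and so on."""
    return [30 * n for n in range(1, max_failures + 1)]

waits = propagation_retry_waits()
print(waits[:3])        # waits after the first three failures: [30, 60, 90]
print(sum(waits) // 60) # total minutes of waiting before the schedule is disabled
```

So a propagation that keeps failing accumulates a little over an hour of waiting before Streams gives up and disables the schedule.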

648 Checking the Propagation Rules
Check DBA_STREAMS_RULES or a similar view. Check for rule-based transformations. SELECT streams_name, rule_owner, rule_name, rule_condition, rule_set_type "Type of Rule", streams_rule_type "Rule Level", schema_name, object_name, subsetting_operation, dml_condition, same_rule_condition "Orig?" FROM dba_streams_rules WHERE streams_type = 'PROPAGATION'; SELECT s.rule_name, t.transform_function_name FROM DBA_STREAMS_RULES s, DBA_STREAMS_TRANSFORM_FUNCTION t WHERE s.rule_name = t.rule_name AND s.rule_owner = t.rule_owner AND s.streams_type = 'PROPAGATION'; Checking the Propagation Rules If a propagation is not behaving the way you expect, the problem may be that the rules in either the positive or negative rule set for the Streams client are not configured properly. You can check the rules for propagation by querying any of the DBA_STREAMS_*_RULES views. If you use both positive and negative rule sets in your Streams environment, it is important to know whether a rule returned by this view is in the positive or negative rule set for a particular Streams client. Rules in the negative rule set that evaluate to TRUE may be causing the propagation to discard messages for those objects. It is possible to modify the rule condition of a Streams rule. These modifications may change the behavior of the Streams clients using the Streams rule. If the value of the SAME_RULE_CONDITION column in the DBA_STREAMS_RULE view is NO, the rule has been modified. Query the RULE_CONDITION column to see the current condition for the rule. If you are sure that no global or schema rules are causing the unexpected behavior, check the table rules in the rule sets used by the propagation. To resolve the problem with improperly configured rules, you may need to add one or more rules, drop rules, add or remove subsetting conditions, or alter the rule to add or remove additional conditions. Oracle Database 11g: Implement Streams

649 Troubleshooting Apply
Checklist: Determine which capture processes use combined capture and apply. What is the state of the apply process? Enabled, disabled, or aborted Is the apply process configured correctly? Rules Source database Transformations Apply handlers Apply of captured messages or user-enqueued messages Are there any errors in the error queue or apply process errors? Troubleshooting Apply The next few slides show how to troubleshoot when changes are not applied at the destination site. If the Apply process is disabled: Was the apply process stopped by another DBA? Was the capture process ever started? If the Apply process is aborted: Check the appropriate trace file for messages. Are there errors in the error queue? The apply process must be enabled and there should be activity visible through the dynamic views for the apply process: GV$STREAMS_APPLY_COORDINATOR, GV$STREAMS_APPLY_READER, and GV$STREAMS_APPLY_SERVER. Streams is a rule-based system. Rules must be configured at each stage (capture, propagation, and apply) for the DML or DDL messages to be replicated. If apply is configured correctly but if the propagation rules are configured so that messages are being discarded, the apply process never receives the messages and thus cannot apply them to the destination database. Oracle Database 11g: Implement Streams

650 Checking Apply Process State and Configuration
Confirm that the apply process is ENABLED and configured to apply captured messages correctly: SELECT a.apply_name, a.apply_captured, p.source_database SOURCE_DB, a.status, a.rule_set_name, a.negative_rule_set_name, a.error_number, a.error_message FROM DBA_APPLY a, DBA_APPLY_PROGRESS p WHERE a.apply_name = p.apply_name; APPLY_NAME APPLY_CAPT SOURCE_DB STATUS RULE_SET_NAME NEGATIVE_RULE_SET_NAME ERROR_NUMBER ERROR_MESSAGE APPLY_SITE1_LCRS YES SITE1.NET ENABLED RULESET$_19 Checking Apply Process State and Configuration An apply process applies changes only when it is enabled. Query the STATUS column in DBA_APPLY to determine the state of the apply process. The possible values are: ENABLED DISABLED ABORTED If the apply process is disabled, do the following: Try to restart the apply process. If the apply process is aborted, you may need to correct an error before you can restart it successfully. Check the apply process parameters for any specified limits. Check whether the apply process is being stopped by a DBA or a script as part of the database shutdown process. If so, the apply process is not automatically restarted when the database instance is restarted. APPLY_CAPTURED should be YES for DML or DDL LCRs to be applied. For messages explicitly enqueued, or if LCRs are generated from a non-Oracle site, APPLY_CAPTURED should have a value of NO. If an apply process is not applying the expected type of messages, you may need to create a new apply process to apply the messages. Oracle Database 11g: Implement Streams

651 Oracle Database 11g: Implement Streams 18 - 651
Checking Apply Process State and Configuration (continued) If the source database is not what you expected, you most likely did not specify the source database name when configuring rules for the apply process. In that case, the apply process determines the source database for the LCRs it applies based on the first LCR it receives. An apply process can process LCRs only from a single source database. If you enqueue LCRs from more than one source database into the same queue, you can encounter this problem. To resolve the problem, modify or re-create the rules for the apply process so they include the desired source database. Another common error is incorrectly spelling the name of the source database. The correct value for the source database name is the GLOBAL NAME of the source database as returned from the SELECT GLOBAL_NAME FROM GLOBAL_NAME; query. If the CNUM_MSGS in V$BUFFERED_QUEUES increases consistently for the apply queue, but the V$STREAMS_APPLY_READER view shows no activity, check the source database name for the existing apply rules. Make sure that the rule sets assigned to the apply process are correct. Use additional queries to determine whether the rule sets are empty and, if not empty, whether the rules in them are configured properly. Oracle Database 11g: Implement Streams
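Once the cause of a disabled or aborted apply process has been corrected, the process can be restarted with the DBMS_APPLY_ADM package. A minimal sketch using the apply process name from the example output; substitute your own:

```sql
-- Restart an apply process that was stopped or aborted
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_SITE1_LCRS');
END;
/
```

If the process aborts again, recheck the ERROR_NUMBER and ERROR_MESSAGE columns in DBA_APPLY and the error queue before restarting.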

652 Is the Apply Process Current?
If an apply process applies captured messages, you can query the following dictionary views to determine latency and activity: GV$STREAMS_APPLY_COORDINATOR DBA_APPLY_PROGRESS SELECT apply_name NAME, hwm_time "Apply Time", hwm_message_create_time "Msg Creation", (hwm_time - hwm_message_create_time)*86400 "Latency in Seconds", hwm_message_number "Applied Message #" FROM GV$STREAMS_APPLY_COORDINATOR; Is the Apply Process Current? If an apply process has not applied recent changes, the cause may be that the apply process has fallen behind. You can check apply process latency by querying the GV$STREAMS_APPLY_COORDINATOR dynamic performance view. If apply process latency is high, you may be able to improve performance by adjusting the setting of the parallelism apply process parameter. The output for the query in the slide is similar to: NAME Apply Time Msg Creation Latency in Seconds Applied Message # APPLY_SITE1_LCRS 30/08/07 11:21:33 30/08/07 11:21: The differences between using GV$STREAMS_APPLY_COORDINATOR and DBA_APPLY_PROGRESS are the following: The apply process must be enabled when you run the query on the V$ view, whereas the apply process can be enabled or disabled when you query DBA_APPLY_PROGRESS. GV$STREAMS_APPLY_COORDINATOR may show the latency for a more recent transaction. Oracle Database 11g: Implement Streams
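When latency is high, the parallelism parameter mentioned above can be raised. A minimal sketch follows; the apply process name and the value of 4 are illustrative:

```sql
-- Increase the number of parallel apply servers for one apply process.
-- The value is passed as a string, per the SET_PARAMETER signature.
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'APPLY_SITE1_LCRS',
    parameter  => 'parallelism',
    value      => '4');
END;
/
```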

653 Oracle Database 11g: Implement Streams 18 - 653
Is the Apply Process Current? (continued) You can use the following query to display the capture-to-apply latency for individual messages by using the DBA_APPLY_PROGRESS view: SELECT APPLY_NAME, (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency", TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') "Msg Creation", TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time", APPLIED_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS; With formatting, your output may look similar to the following: Applied Apply Process Message Name Latency Msg Creation Apply Time Number APPLY_SITE1_LCRS :05:23 02/28/07 14:09:02 02/28/ APPLY_SITE2_LCRS :29:21 02/28/07 13:13:22 02/28/ Note: This query does not take care of time zone differences. Oracle Database 11g: Implement Streams

654 Determining the Scope of an Apply Problem
Local issue? If no: check the message source: (1) capture rules at the source site; (2) propagation rules at the source site and at all intermediate sites. If yes: (1) check the message destination (apply processes and rules, instantiation SCNs, missing dictionary errors); (2) check the message source and all intermediate sites. Determining the Scope of an Apply Problem An apply process must dequeue messages from its queue before it can apply these messages. Rules for the capture process must be configured properly for the message to be created and enqueued. Rules for propagation at each intermediate site must also be set correctly to route the message to the destination queue. If the rules for capture or propagation are not configured properly, the messages may never reach the apply process queue. Therefore, when you troubleshoot the apply process, it helps to first determine the scope of the problem. Is it a local issue, or does it affect more than one site? If you expect an apply process to apply changes to a particular table, but the apply process is not applying these changes, it may be that the positive rules for the apply process do not evaluate to TRUE for the table LCRs or that a negative rule causes changes for the table to be discarded. If no sites are applying any changes at the destination sites, start from the source site and verify that the rules are configured correctly for capturing changes to the object. Then verify that propagation rules exist to send the changes to the proper queue at the destination site. Proceed in this manner for each site that leads to the destination site. You must verify that the necessary positive rules exist, that the rule sets are not empty, and that there are no negative rules for the messages and no transformations altering the message in a way that invalidates the apply or propagation rules. Oracle Database 11g: Implement Streams

655 Checking for Apply Process Rules
Check DBA_STREAMS_RULES or a similar view. Check for rule-based transformations. SELECT streams_name, rule_owner, rule_name, rule_condition, rule_set_type "TYPE", streams_rule_type "LEVEL", schema_name, object_name, subsetting_operation, dml_condition, same_rule_condition "Orig?" FROM dba_streams_rules WHERE streams_type = 'APPLY'; SELECT s.rule_name, t.user_function_name FROM DBA_STREAMS_RULES s, DBA_STREAMS_TRANSFORMATIONS t WHERE s.rule_name = t.rule_name AND s.rule_owner = t.rule_owner AND s.streams_type = 'APPLY'; Checking for Apply Process Rules If an apply process is behaving in an unexpected way, the problem may be that the rules in either the positive or negative rule set for the apply process are not configured properly. For example, if you expect an apply process to apply changes to a particular table, but the apply process is not applying these changes, the cause may be the rules in the rule sets used by the apply process. You can check the rules for apply by querying any of the DBA_STREAMS_*_RULES data dictionary views. If you use both positive and negative rule sets in your Streams environment, it is important to know whether a rule returned by this view is in the positive or negative rule set for a particular Streams client. In general, a message satisfies the rule sets for a Streams client if no rules in the negative rule set evaluate to TRUE for the message, and if at least one rule in the positive rule set evaluates to TRUE for the message. It is possible to modify the rule condition of a Streams rule. These modifications may change the behavior of the Streams clients using the Streams rule. If the value of the SAME_RULE_CONDITION column in the DBA_STREAMS_RULES view is NO, the original system-created rule condition has been modified. Query the RULE_CONDITION column to see the current condition for the rule. Oracle Database 11g: Implement Streams

656 Checking for Custom Apply or Error Handlers
Check whether there is a custom apply procedure or an error handler specified for the object. DML or error handlers DDL, message, or precommit handlers SELECT object_owner, object_name, operation_name, user_procedure, apply_name, error_handler FROM DBA_APPLY_DML_HANDLERS; SELECT apply_name, ddl_handler, precommit_handler, message_handler FROM DBA_APPLY; Checking for Custom Apply or Error Handlers You can use PL/SQL procedures to handle messages dequeued by an apply process in a customized way. These handlers include DML handlers, DDL handlers, precommit handlers, and message handlers. A user procedure can be used for any customized processing of LCRs. For example, if you want each insert into a particular table at the source database to result in inserts into multiple tables at the destination database, you can create a user procedure that processes INSERT operations on the table to accomplish this. Or, if you want to log DDL changes before applying them, you can create a user procedure that processes DDL operations to accomplish this. Typically, DML handlers and DDL handlers are used in Streams replication environments to perform custom processing of LCRs, but these handlers may be used in nonreplication environments as well. For example, such handlers may be used to record changes made to database objects without replicating these changes. If an apply process is not behaving as expected, check the handler procedures used by the apply process and then correct any flaws. You can find the names of these procedures by querying the DBA_APPLY_DML_HANDLERS and DBA_APPLY data dictionary views. You may need to modify a handler procedure or remove it to correct an apply problem. Oracle Database 11g: Implement Streams

657 Checking the Error Queue
Query DBA_APPLY_ERROR to determine whether there are errors in the error queue. SELECT apply_name, source_database, local_transaction_id, message_number, error_message FROM DBA_APPLY_ERROR; APPLY_NAME SOURCE_DATABASE LOCAL_TRANSACTION_ID MESSAGE_NUMBER ERROR_MESSAGE APPLY_SITE1_LCRS SITE1.NET ORA-00001: unique constraint (HR.COUNTRY_C_ID_PK_NOIOT) violated Checking the Error Queue When an apply process cannot apply a message, it moves the message and all the other messages in the same transaction into the error queue. You must check for apply errors periodically to see whether there are any transactions that could not be applied. You can check for apply errors by querying the DBA_APPLY_ERROR view. You can reexecute a particular transaction from the error queue or all the transactions in the error queue. You can retry an error transaction by running the EXECUTE_ERROR procedure, and optionally specify a user procedure to modify one or more messages in the transaction before the transaction is executed. The modifications should enable successful execution of the transaction. The messages in the transaction can be LCRs or user messages. If the apply parameter disable_on_error is set to TRUE (default value), the apply process is disabled when it encounters an error. Oracle Database 11g: Implement Streams

658 Oracle Database 11g: Implement Streams 18 - 658
Checking the Error Queue (continued) If an error is found in the error queue, as shown in the slide, you have three options: Correct the problem and reexecute the transaction. Correct the problem and retry all the transactions for a particular apply process. Remove the transaction from the error queue. After the error has been cleared, you can restart the apply process, but you may want to change the value of the disable_on_error parameter before you do so. You can also use the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package to reexecute a deferred transaction that did not initially complete successfully in the security context of the original receiver of the transaction. To view the contents of an LCR in the error queue, you can use the PRINT_TRANSACTION and PRINT_ERRORS procedures to display this information. See the Oracle Streams Concepts and Administration Guide for more information. Oracle Database 11g: Implement Streams
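The three options above map directly onto DBMS_APPLY_ADM calls. A minimal sketch follows; the transaction ID '5.4.312' and the apply process name are hypothetical placeholders, not values from the slide:

```sql
BEGIN
  -- Option 1: retry one failed transaction
  -- (local_transaction_id comes from DBA_APPLY_ERROR).
  DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');

  -- Option 2: retry every error transaction for one apply process.
  -- DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY_SITE1_LCRS');

  -- Option 3: discard a transaction that cannot be corrected.
  -- DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '5.4.312');
END;
/
```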

659 Oracle Database 11g: Implement Streams 18 - 659
Common Apply Errors ORA-26786: a row with the key exists, but some column values conflict (on top of ORA-01403) ORA-26787: the row with the key does not exist (on top of ORA-01403) ORA-26687: no instantiation SCN provided ORA-00001: unique constraint (%s.%s) violated ORA-06550: line x, column y: error in STREAMS process ORA-12801: error signaled in parallel query server P000 ORA-06550: line 1, column 15: PLS-00201: identifier 'HR.HR_TO_DEMO' must be declared … Common Apply Errors In earlier releases, an ORA-01403 error was returned for delete and update conflicts. In Oracle Database 11g, two new error messages make it easier to handle apply errors in DML handlers and error handlers. They appear on top of the ORA-01403 error. An ORA-26787 error is raised if the row to be updated or deleted does not exist in the target table. An ORA-26786 error is raised when the row exists in the target table, but the values of some columns do not match those of the row LCR. If your Streams processes begin comparing elements that are not synchronized, you receive the ORA-26687 error. You receive the ORA-00001 error if transactions are applied out of order. You can use an error or DML handler to prevent this. The ORA-06550 error is often caused by missing privileges. This typically causes the apply process to abort without errors in the error queue. Then the trace file for the apply coordinator reports the full error stack. Note: For these and more error details, see Appendix D titled “Common Streams Error Messages.” Oracle Database 11g: Implement Streams

660 Oracle Database 11g: Implement Streams 18 - 660
Summary In this lesson, you should have learned how to: Describe how to troubleshoot Resolve typical problems that occur during: Capture Propagation Apply Oracle Database 11g: Implement Streams

661 Practice 21 Overview: Troubleshooting Streams
This practice covers analyzing a Streams Healthcheck. EURO database AMER database HR_CAP_Q HR schema HR_APPLY_Q HR_PROPAGATION HR_CAP HR_CAP_2 HR_APPLY HR_APPLY_2 HR_PROPAGATION_2 Oracle Database 11g: Implement Streams

662 Message Queuing Concepts

663 Oracle Database 11g: Implement Streams 18 - 663
Objectives After completing this lesson, you should be able to: Configure secure queue users for enqueuing and dequeuing messages Configure a messaging client for dequeuing messages Describe the process of enqueuing and dequeuing messages Configure message notification Manage Advanced Queuing queues, queue tables, propagation schedules, and transformations by using Enterprise Manager Objectives Streams enables messaging with queues of the SYS.AnyData type. These queues can stage user messages whose payloads are of the SYS.AnyData type. A SYS.AnyData payload can be a wrapper for payloads of different data types. User applications can explicitly enqueue messages into a queue. The user applications can format these user-enqueued messages as LCRs or user messages. These user-enqueued messages can be dequeued by an apply process, a messaging client, or a user application. Messages that were enqueued explicitly into a queue can be propagated to another queue or explicitly dequeued from the same queue. Streams includes the features of Oracle Streams Advanced Queuing (AQ), which supports all the standard features of message queuing systems, including: Multiconsumer queues Publish and subscribe Content-based routing Internet propagation Transformations Gateways to other messaging subsystems Oracle Database 11g: Implement Streams

664 Overview of Advanced Queuing
Advanced Queuing enables users or applications to place messages on queues for propagation. Each user-enqueued message is dequeued and processed by one of the consumers. A user-enqueued message stays in the queue until a consumer dequeues it or the message expires. User-enqueued messages are said to be propagated when they are passed on to another queue in the same database or in a remote database. Overview of Advanced Queuing Oracle Streams Advanced Queuing (AQ) provides database-integrated message queuing functionality. It is built on top of Oracle Streams and leverages the functions of Oracle Database so that messages can be stored persistently, propagated between queues on different computers and databases, and transmitted using Oracle Net Services and HTTP(S). Advanced Queuing allows users or applications to place messages on queues. For example, in a Web-based business, producer applications enqueue messages and consumer applications dequeue messages. Each message is dequeued and processed by one of the consumers. A message stays in the queue until a consumer dequeues it or the message expires. Enqueued messages are said to be propagated when they are reproduced on another queue, which can be in the same database or in a remote database. Because Oracle Streams AQ is implemented in database tables, all operational benefits of high availability, scalability, and reliability are also applicable to queue data. Messages can be queried using standard SQL. This means that you can use SQL to access the message properties, the message history, and the payload. With SQL access, you can also do auditing and tracking. All available SQL technology, such as indexes, can be used to optimize access to messages. Oracle Database 11g: Implement Streams

665 Fundamental Terminology
User-enqueued messages Enqueue Dequeue Agent Subscriber Fundamental Terminology A message is the smallest unit of information that is inserted into and retrieved from a queue. A message consists of control information (metadata) and payload (data). Captured messages are always LCRs. User-enqueued messages: Include both logical change records (LCRs) and non-LCRs Queue: Is a repository for messages. Queues are stored in queue tables. Each queue table is a database table and contains one or more queues. Each queue table contains a default exception queue. All queues within a queue table must contain messages with the same payload definition. Enqueue: Is used to place a message in a queue Dequeue: Is used to consume a message. A single message can be processed and consumed by more than one consumer. Agent: Is an end user or an application that uses a queue. An agent is associated with a database user, which is assigned privileges that are required by the end user or application. You can have multiple agents associated with the same database user. Subscriber: Is a name or address that is used to identify a user or an application interested in messages in a multiconsumer queue. Subscribers are also referred to as consumers. Oracle Database 11g: Implement Streams

666 Interfaces to Oracle Streams
OCI OCCI JMS Gateway products Enterprise Manager Interfaces to Oracle Streams Streams includes the features of Advanced Queuing (AQ), which supports all the standard features of message-queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, transformations, and gateways to other messaging subsystems. Oracle Streams supports a variety of open interfaces for enqueuing and dequeuing, including C, Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), Java Message Service (JMS), and PL/SQL. Messaging Gateway is a feature of the Oracle database that provides propagation between Oracle queues and non-Oracle message queuing systems. Messages enqueued in an Oracle queue are automatically propagated to a non-Oracle queue, and the messages enqueued in a non-Oracle queue are automatically propagated to an Oracle queue. Messaging Gateway supports the native message format for the non-Oracle messaging system and the specification of user-defined transformations that are invoked during propagation. Heterogeneous Services and an Oracle Transparent Gateway can be used to apply changes encapsulated in LCRs directly to database objects in a non-Oracle database. The Streams tool in the Oracle Enterprise Manager Console provides some configuration, administration, and monitoring capabilities to help you manage your environment. Oracle Database 11g: Implement Streams

667 Oracle Database 11g: Implement Streams 18 - 667
Enqueuing Messages Messages can be enqueued into a staging queue in three ways: A capture process enqueues DML and DDL changes as LCR messages. An enqueue directive instructs an apply process to enqueue certain messages into the staging queue. A user application enqueues user messages or LCRs as messages of the SYS.AnyData type. Enqueuing Messages User-enqueued messages can contain LCRs or any other type of message. Any user message that is explicitly enqueued by a user or an application is called a user-enqueued message. Messages that were enqueued by a user procedure called from an apply process are also user-enqueued messages. You can specify a destination queue for a rule by using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package. If an apply process has such a rule in its positive rule set, and a message satisfies the rule, the apply process enqueues the message into the destination queue. A message that has been enqueued into a queue by using the SET_ENQUEUE_DESTINATION procedure is the same as any other user-enqueued message. Such messages can be manually dequeued, propagated to another queue, or applied by an apply process created with the apply_captured parameter set to FALSE. Oracle Database 11g: Implement Streams

668 Oracle Database 11g: Implement Streams 18 - 668
Secure Queue Users For a user to be able to enqueue or dequeue messages from a secure agent queue, you must configure the user as a secure queue user. Use DBMS_STREAMS_ADM.SET_UP_QUEUE to configure a secure queue user. BEGIN DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'ix.streams_queue_table', queue_name => 'streams_queue', queue_user => 'ix'); END; / Secure Queue Users An Advanced Queuing (AQ) agent is a queue user. This can be an end user or an application. There are two types of agents: Users who place messages in a queue (enqueuing), who are called producers Users who retrieve messages from a queue (dequeuing), who are called consumers or subscribers Any number of AQ agents may connect to the queue at a given time. Agents insert messages into a queue and retrieve messages from the queue by using the Oracle Streams or Oracle Streams AQ operational interfaces. To successfully enqueue messages into a queue, the current user must be mapped to a unique AQ agent with the same name as the current user. You can run the DBMS_STREAMS_ADM.SET_UP_QUEUE procedure and specify a user as the queue user to grant the necessary privileges to the user to perform enqueues. The AQ agent is created automatically when you run SET_UP_QUEUE and specify a queue user. Queue creation is skipped if the queue already exists, but a new queue user is configured if one is specified. After a secure queue user is created, that user will still not be able to successfully enqueue messages into the queue until a subscriber exists for the messages. Oracle Database 11g: Implement Streams

669 Subscriptions and Recipient Lists
The SYS.AnyData queue is a multiconsumer queue that allows consumption of a message by more than one consumer. A list of consumers for a message is identified by using: Subscription Add a subscription to a queue by using the DBMS_AQADM.ADD_SUBSCRIBER procedure. Recipient list Specified in the recipient_list message property when the message is enqueued Subscriptions and Recipient Lists Oracle Streams uses multiconsumer queues, which means that more than one consumer can process or consume a single message. For example, an apply process and a user application are two different consumers. There are two methods of identifying the list of consumers for a message: subscriptions and recipient lists. Subscriptions You can add a subscription to a queue by using the AQ$_AGENT parameter of the DBMS_AQADM.ADD_SUBSCRIBER procedure. All consumers that you add as subscribers to a multiconsumer queue must have unique values for the AQ$_AGENT parameter. You cannot add subscriptions to single-consumer queues or exception queues. You can remove a subscription by using the DBMS_AQADM.REMOVE_SUBSCRIBER procedure. Oracle Database 11g: Implement Streams

670 Oracle Database 11g: Implement Streams 18 - 670
Subscriptions and Recipient Lists (continued) Recipient Lists In some situations, it may be desirable to enqueue a message that is targeted at a specific set of consumers rather than the default list of subscribers. You can accomplish this by specifying a recipient list at the time of enqueuing the message. You do not need to specify subscriptions for a multiconsumer queue if messages are enqueued with a recipient list that is specified in the message properties. If a recipient list is specified during enqueue, it overrides the subscription list. The consumers that are specified in the recipient list may or may not be subscribers for the queue. An error is raised if the queue does not have any subscribers and the enqueue does not specify a recipient list. Messages that have a specified recipient list will not be available for dequeue by other subscribers of the queue. In PL/SQL, you specify the recipient list by adding elements to the recipient_list field of the message_properties record. With Oracle Call Interface (OCI), you specify the recipient list by using the OCIAttrSet procedure to specify an array of OCI_DTYPE_AQAGENT descriptors as the recipient list (OCI_ATTR_RECIPIENT_LIST attribute) of an OCI_DTYPE_AQMSG_PROPERTIES message properties descriptor. Oracle Database 11g: Implement Streams
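The PL/SQL approach described above can be sketched as follows. This is a minimal example, not the course lab code: the queue name and IX agent are reused from the slides in this lesson, and the VARCHAR2 payload is a hypothetical placeholder:

```sql
DECLARE
  enq_opts DBMS_AQ.ENQUEUE_OPTIONS_T;
  props    DBMS_AQ.MESSAGE_PROPERTIES_T;
  msgid    RAW(16);
BEGIN
  -- Target this message at the IX agent only,
  -- overriding the queue's subscriber list.
  props.recipient_list(1) := SYS.AQ$_AGENT('IX', NULL, NULL);
  DBMS_AQ.ENQUEUE(
    queue_name         => 'ix.streams_queue',
    enqueue_options    => enq_opts,
    message_properties => props,
    payload            => SYS.AnyData.ConvertVarchar2('order shipped'),
    msgid              => msgid);
  COMMIT;
END;
/
```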

671 Procedures for Configuring AQ Agents
DBMS_AQADM.CREATE_AQ_AGENT ( agent_name IN VARCHAR2, enable_http IN BOOLEAN DEFAULT FALSE, enable_smtp IN BOOLEAN DEFAULT FALSE, enable_anyp IN BOOLEAN DEFAULT FALSE); DBMS_AQADM.ENABLE_DB_ACCESS ( agent_name IN VARCHAR2, db_username IN VARCHAR2) DBMS_AQADM.ADD_SUBSCRIBER ( queue_name IN VARCHAR2, subscriber IN sys.aq$_agent, rule IN VARCHAR2 DEFAULT NULL, transformation IN VARCHAR2 DEFAULT NULL); Procedures for Configuring AQ Agents You can use the following procedures when configuring AQ agents as subscribers to queues: CREATE_AQ_AGENT: Manually creates an AQ agent to interact with secure queues. You can also use this procedure to register an agent for AQ Internet access by using HTTP/SMTP protocols. ENABLE_DB_ACCESS: Grants an AQ agent the privileges of a specific database user. You should have previously created the AQ agent by using the CREATE_AQ_AGENT procedure. ADD_SUBSCRIBER: Adds a default subscriber to a queue The DBMS_STREAMS_ADM.SET_UP_QUEUE procedure performs the actions of both the CREATE_AQ_AGENT and ENABLE_DB_ACCESS procedures when configuring a secure queue user. Oracle Database 11g: Implement Streams
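The first two procedures shown above are typically called together when configuring an agent by hand (as the slide notes, SET_UP_QUEUE performs both steps for you). A minimal sketch, reusing the IX agent and user from this lesson's examples:

```sql
BEGIN
  -- Create the AQ agent, then map it to the IX database user
  -- so the user can work with secure queues through this agent.
  DBMS_AQADM.CREATE_AQ_AGENT(agent_name => 'IX');
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'IX',
    db_username => 'IX');
END;
/
```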

672 Oracle Database 11g: Implement Streams 18 - 672
Creating a Subscriber To configure a secure queue user as a subscriber, thus enabling them to dequeue messages or messages from a queue, use the: SYS.AQ$_AGENT type to specify the AQ agent DBMS_AQADM.ADD_SUBSCRIBER procedure to specify that the agent is a subscriber to the queue DECLARE subscriber SYS.AQ$_AGENT; BEGIN subscriber := SYS.AQ$_AGENT('IX',NULL,NULL); SYS.DBMS_AQADM.ADD_SUBSCRIBER( queue_name => 'ix.streams_queue', subscriber => subscriber); END; / Creating a Subscriber to a SYS.AnyData Queue In the example in the slide, the AQ agent named IX that you configured previously is configured as a subscriber of the IX.STREAMS_QUEUE queue. This means the IX user is allowed to explicitly dequeue messages from the queue. Oracle Database 11g: Implement Streams

673 Viewing Subscriptions and Performance
Subscriptions can be viewed in the following tables and views: DBA_SUBSCR_REGISTRATIONS USER_SUBSCR_REGISTRATIONS V$SUBSCR_REGISTRATION_STATS Performance information can be viewed in these new tables and views: V$PERSISTENT_QUEUES V$PERSISTENT_SUBSCRIBERS V$PERSISTENT_PUBLISHERS DBA_HIST_PERSISTENT_QUEUES DBA_HIST_PERSISTENT_SUBS Viewing Subscriptions and Performance For diagnosability of subscription registrations, the data dictionary has been enhanced in Oracle Database 11g with additional information and provides two views: DBA_SUBSCR_REGISTRATIONS and USER_SUBSCR_REGISTRATIONS. DBA_SUBSCR_REGISTRATIONS provides information for diagnosability of subscription registrations of any user. Oracle Database 11g: Implement Streams

674 Oracle Database 11g: Implement Streams 18 - 674
Viewing Subscriptions and Performance (continued) The following are details about the *_SUBSCR_REGISTRATIONS view:
Column                       Datatype                      NULL
REG_ID                       VARCHAR2(64)
SUBSCRIPTION_NAME            VARCHAR2(128)                 NOT NULL
LOCATION_NAME                VARCHAR2(256)                 NOT NULL
USER#                        NUMBER                        NOT NULL
USER_CONTEXT                 RAW(128)
CONTEXT_SIZE                 NUMBER
NAMESPACE                    VARCHAR2(9)
PRESENTATION                 VARCHAR2(7)
VERSION                      VARCHAR2(5)
STATUS                       VARCHAR2(8)
ANY_CONTEXT                  ANYDATA
CONTEXT_TYPE                 NUMBER
QOSFLAGS                     VARCHAR2(13)
PAYLOAD_CALLBACK             VARCHAR2(4000)
TIMEOUT                      TIMESTAMP(6)
REG_TIME                     TIMESTAMP(6) WITH TIME ZONE
NTFN_GROUPING_CLASS          VARCHAR2(4)
NTFN_GROUPING_VALUE          NUMBER
NTFN_GROUPING_TYPE           VARCHAR2(7)
NTFN_GROUPING_START_TIME     TIMESTAMP(6) WITH TIME ZONE
NTFN_GROUPING_REPEAT_COUNT   VARCHAR2(40)
Statistics on Subscription Registrations For diagnosability of notifications, there is a new view called V$SUBSCR_REGISTRATION_STATS.
SQL> DESC V$SUBSCR_REGISTRATION_STATS
Name                         Null?    Type
REG_ID                                NUMBER
NUM_NTFNS                             NUMBER
NUM_GROUPING_NTFNS                    NUMBER
LAST_NTFN_SENT_TIME                   TIMESTAMP(3) WITH TIME ZONE
TOTAL_EMON_LATENCY                    NUMBER
TOTAL_PLSQL_EXEC_TIME                 NUMBER
LAST_ERR                              VARCHAR2(30)
LAST_ERR_TIME                         TIMESTAMP(3) WITH TIME ZONE
Oracle Database 11g: Implement Streams

675 Oracle Database 11g: Implement Streams 18 - 675
Messaging Client User A messaging client: Consumes user-enqueued messages from a SYS.AnyData queue based on rules Is invoked by a user or application Is associated with only one database user A messaging client user: Creates the messaging client by using procedures in the DBMS_STREAMS_ADM package Is granted the privileges to dequeue from the queue Can be associated with many messaging clients Messaging Client User A messaging client consumes user-enqueued messages when it is invoked by an application or a user. You use rules to specify which user-enqueued messages in the queue are dequeued by a messaging client. These user-enqueued messages may be user-enqueued LCRs or user-enqueued messages. You create a messaging client by specifying dequeue for the streams_type parameter when you run one of the following procedures of DBMS_STREAMS_ADM: ADD_MESSAGE_RULE ADD_TABLE_RULES ADD_SUBSET_RULES ADD_SCHEMA_RULES ADD_GLOBAL_RULES The user who creates a messaging client is the messaging client user and is granted the privileges to dequeue messages that satisfy the messaging client rule sets from the queue by using the messaging client. A messaging client can be associated with only one user, but one user may be associated with many messaging clients. Oracle Database 11g: Implement Streams

676 Creating a Messaging Client
BEGIN DBMS_STREAMS_ADM.ADD_TABLE_RULES ( table_name => 'oe.orders', streams_type => 'dequeue', streams_name => 'ORDERS_DEQ', queue_name => 'ix.streams_queue', source_database => 'site1.net', inclusion_rule => true); END; / Creating a Messaging Client A messaging client is a type of Streams client that enables users and applications to dequeue messages from a staging queue based on rules. A messaging client consumes user-enqueued messages when it is invoked by an application or a user. You use rules to specify which user-enqueued messages in the queue are dequeued by a messaging client. These user-enqueued messages may be LCRs (as demonstrated in the slide) or messages. You can create a messaging client by specifying DEQUEUE for the streams_type parameter in the following DBMS_STREAMS_ADM procedures: ADD_TABLE_RULES ADD_SUBSET_RULES ADD_SCHEMA_RULES ADD_GLOBAL_RULES ADD_MESSAGE_RULE The user who creates a messaging client is granted the privileges to dequeue from the queue by using the messaging client. This user is the messaging client user. The messaging client user can dequeue messages that satisfy the messaging client rule sets. Application dequeue using messaging client Oracle Database 11g: Implement Streams

677 Oracle Database 11g: Implement Streams 18 - 677
Creating a Messaging Client (continued) Unlike propagations and apply processes, which propagate or apply messages automatically when they are running, a messaging client does not automatically dequeue or discard messages. Instead, a messaging client must be used by a user or application to dequeue or discard messages. Oracle Database 11g: Implement Streams
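An explicit dequeue through the messaging client can be sketched as follows. This is a minimal example assuming the ORDERS_DEQ client and IX.STREAMS_QUEUE from the slide above; it must be run by the messaging client user:

```sql
DECLARE
  msg SYS.AnyData;
BEGIN
  -- Dequeue the next message that satisfies the rules
  -- of the ORDERS_DEQ messaging client.
  DBMS_STREAMS_MESSAGING.DEQUEUE(
    queue_name   => 'ix.streams_queue',
    streams_name => 'ORDERS_DEQ',
    payload      => msg,
    dequeue_mode => 'REMOVE',
    navigation   => 'NEXT MESSAGE');
  COMMIT;
END;
/
```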

678 Creating Messaging Client Rules
BEGIN DBMS_STREAMS_ADM.ADD_MESSAGE_RULE ( message_type => 'ix.order_event_typ', rule_condition => ':msg.delivery_date < SYSDATE', streams_type => 'dequeue', streams_name => 'ORDERS_DEQ', queue_name => 'ix.streams_queue', inclusion_rule => true); END; / Creating Messaging Client Rules When you use a rule to specify a Streams task that is relevant only for a user-enqueued message of a specific message type, you are specifying a message rule. You can specify message rules for propagations, apply processes, and messaging clients. The ADD_MESSAGE_RULE procedure in the DBMS_STREAMS_ADM package enables you to configure messaging clients and apply processes. If the streams_type parameter is set to DEQUEUE, the ADD_MESSAGE_RULE procedure creates a dequeue rule for the specified messaging client. You specify a messaging client by providing the client name as the value for the streams_name parameter. If the messaging client does not exist, it is created. You specify the condition for this rule by using the rule_condition parameter. The rule condition for a message rule must use the variable name :msg, which corresponds to the type of the message payload. For example, suppose the message payload type is usr_msg, which has the following attributes: owner, name, and message. With this type, a possible rule condition is: ':msg.owner = ''HR'' AND '||':msg.name = ''EMPLOYEES''' A user or application uses the messaging client to dequeue user-enqueued messages that satisfy the rule condition of positive rules, and to discard user-enqueued messages that satisfy the rule condition of negative rules. Oracle Database 11g: Implement Streams

679 User-Created Messages
Must be explicitly enqueued Can be a message of any data type or user-defined type Must be wrapped in a SYS.AnyData type to be enqueued in a SYS.AnyData queue Can also be a Java Message Service (JMS) type message User-Created Messages User-created messages, which are not LCRs, consist of data converted into a SYS.AnyData type message. The data can be a simple data type, such as NUMBER, or it can be a user-defined type. To enqueue these messages, you can use: PL/SQL: DBMS_STREAMS_MESSAGING.ENQUEUE DBMS_AQ.ENQUEUE DBMS_AQ.ENQUEUE_ARRAY Oracle Call Interface (OCI): OCIAQEnq() OCIAQEnqArray() Oracle C++ Call Interface: send() method of the Producer class Java Message Service (JMS): AQjmsTopicPublisher.publish() QueueSender.send() Oracle Database 11g: Implement Streams
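As an illustration of the PL/SQL enqueue routes listed above, here is a minimal sketch using DBMS_AQ.ENQUEUE with a SYS.AnyData payload. The queue name follows this lesson's environment; the message text is invented:

```sql
DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;    -- default enqueue options
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T; -- default message properties
  msg_id    RAW(16);
  any_payld SYS.AnyData;
BEGIN
  -- Wrap a simple VARCHAR2 payload in SYS.AnyData
  any_payld := SYS.AnyData.ConvertVarchar2('Order 2424 has shipped');
  DBMS_AQ.ENQUEUE(
    queue_name         => 'ix.streams_queue',
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    payload            => any_payld,
    msgid              => msg_id);
  COMMIT;
END;
/
```

DBMS_STREAMS_MESSAGING.ENQUEUE, shown later in this lesson, wraps the same operation with fewer parameters.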

680 User-Enqueued Messages
The message payload is of the SYS.AnyData type. You can enqueue messages with different types of payloads into a Streams staging queue by wrapping the message in the SYS.AnyData type: User-created LCRs Standard data types User-defined object types User-Enqueued Messages You can use both the SYS.AnyData type queues and typed queues for messaging with Streams. Oracle Streams AQ offers the message queuing interface for both types of queues. If a SYS.AnyData type queue is used for messaging, some data types are not supported, and Internet access to the queue is not available. If you use a typed queue for messaging, capture and apply cannot be defined on the queue. By using SYS.AnyData wrappers for message payloads, applications can enqueue messages of different types into a single queue and dequeue these messages, either explicitly by using a dequeue API or implicitly by using an apply process with a handler. If the application is on a remote site, the messages can be propagated to the remote site and the application can dequeue the messages from a local queue in the remote database. Alternatively, a remote application can dequeue messages directly from the source queue by using a variety of standard interfaces, such as PL/SQL and OCI. Oracle Database 11g: Implement Streams

681 Wrapping Message Payloads
Use the CONVERT<data_type> static functions of the SYS.AnyData type, where data_type is the type of object to wrap. All data types are supported except: CLOB, NCLOB, BLOB, and ROWID User-defined types with LOB attributes (including CLOB, NCLOB, and BLOB) Nested table SYS.AnyData.ConvertObject( ix.order_event_typ( 2424, 3350, 146, 'Elia', 'Fawcett', 4, TO_DATE('18-AUG-02')) ) Wrapping Message Payloads You can wrap almost any type of payload in a SYS.AnyData type. To do this, use the CONVERT<data_type> static functions of the SYS.AnyData type, where data_type is the type of data to wrap. These functions take the data as input and return a SYS.AnyData object. The example in the slide wraps the IX.ORDER_EVENT_TYP with the specified attribute values in a SYS.AnyData type. You cannot wrap the following data types in a SYS.AnyData message: Nested table ROWID User-defined types with an attribute of type CLOB, BLOB, BFILE, or VARRAY Here are some more examples of creating SYS.AnyData payloads: VARCHAR2 SYS.AnyData.ConvertVarchar2('Please call my office - Tom') NUMBER SYS.AnyData.ConvertNumber('16') User-defined type: SYS.AnyData.ConvertObject(oe.cust_address_typ( '1646 Brazil Blvd','361168','Chennai','Tam', 'IN')) Oracle Database 11g: Implement Streams

682 Streams Messages with Object Types
Each user-defined type that is used in a message must exist at every database where the message may be staged in a queue. The type must have the same name, owner, and form at each database. Type evolution and inheritance are not supported. The type’s object identifier (OID) does not have to be the same for each database. Streams Messages with Object Types If you plan to enqueue, propagate, or dequeue user-defined object type messages in a Streams environment, each type that is used in these messages must exist at every database where the message may be staged in a queue. Some environments use directed networks to route messages through intermediate databases before they reach their destination. In such environments, the type must exist at each intermediate database, even if the messages of this type are never explicitly enqueued or dequeued at a particular intermediate database. In addition, the following requirements must be met for such types: The type name must be the same at each database. The type must be in the same schema at each database. The shape of the type must match exactly at each database. The type cannot use inheritance or type evolution at any database. The type cannot contain LOBs or ROWIDs. Object identifiers (OID) do not need to be the same for each database. Oracle Database 11g: Implement Streams

683 Streams Messages Containing LCRs
An explicitly enqueued message can be: An LCR wrapped in the SYS.AnyData type A user-defined type that contains an LCR as an attribute Streams Messages Containing LCRs Because you can enqueue a message of any type in a SYS.AnyData queue, you can also explicitly enqueue LCRs. The LCR can be wrapped directly into a SYS.AnyData type, or it can be an attribute of a user-defined type. If you enqueue a SYS.AnyData-wrapped LCR in a SYS.AnyData queue, the LCR can be applied automatically by an apply process. However, because this LCR is not a captured LCR, you will need to create an apply process with the apply parameter apply_captured set to FALSE. You cannot alter this parameter for an existing apply process by calling the DBMS_APPLY_ADM.SET_PARAMETER procedure. If you enqueue the LCR as an attribute of a user-defined type, a message handler must be defined for an apply process. The apply process dequeues the message, and the message handler extracts the LCR. The message handler can call the EXECUTE member procedure of the extracted LCR to apply the changes to the shared table. Oracle Database 11g: Implement Streams
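As noted above, user-enqueued LCRs are processed only by an apply process created with the apply_captured parameter set to FALSE. A minimal sketch of such a creation call, assuming the queue from this lesson (the apply process name is illustrative):

```sql
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'ix.streams_queue',
    apply_name     => 'apply_user_lcrs',     -- illustrative name
    apply_captured => FALSE);  -- dequeue user-enqueued (not captured) LCRs
END;
/
```

Because apply_captured cannot be altered later with SET_PARAMETER, an environment that must apply both captured and user-enqueued LCRs needs two separate apply processes.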

684 Dequeuing Messages in Streams
The operation of retrieving messages from a queue is known as dequeuing. User-enqueued messages in the staging queue can be dequeued automatically: Apply handles user-enqueued LCR messages in the same manner as captured LCR messages. Apply configured with a message handler dequeues and processes non-LCR messages. You can use the DBMS_STREAMS_MESSAGING.DEQUEUE or DBMS_AQ.DEQUEUE procedures to explicitly dequeue user-enqueued messages. Dequeuing Messages in Streams To dequeue a message means to retrieve a message from a queue. User-enqueued messages can be automatically dequeued by the apply process, or they can be explicitly dequeued by an application or messaging client. Oracle Database 11g: Implement Streams

685 Unwrapping Message Payloads
Use the GET<data type> member functions of the SYS.AnyData type, where <data type> is the type of object to unwrap. Use the GETTYPE or GETTYPENAME member functions of the SYS.AnyData type to determine the type of embedded data. ... IF (evt.GetType() = DBMS_TYPES.TYPECODE_VARCHAR2) THEN num_var := evt.GetVarchar2(string_var); ELSIF (evt.GetTypeName()='SYS.LCR$_ROW_RECORD') THEN num_var := evt.GetObject(rowLCR); Unwrapping Message Payloads The GETTYPENAME function gets the fully qualified type name for the AnyData data. If it is based on a built-in type, the type name is returned, such as NUMBER. If it is based on a user-defined type, the value <schema_name>.<type_name> is returned, such as OE.ORDER_EVENT_TYP. The GETTYPE function gets the typecode of the AnyData (listed in DBMS_TYPES). After you know the data type of the embedded data, you can use the GET<data type> functions to get the current data value from the SYS.AnyData object instance. To retrieve the embedded data for a SYS.AnyData instance, call the appropriate GET<data type> function. When you call the function, you must supply an OUT variable of the specified data type. The function returns a PLS_INTEGER value, which is relevant only if you are accessing the SYS.AnyData in a piecewise fashion (meaning that you have invoked the SYS.AnyData PIECEWISE member procedure before calling any GET* function). Here are some examples: VARCHAR2 retval := evt.GETVarchar2(string_var); NUMBER retval := evt.GETNumber(num_var); User-defined type retval := evt.GETObject(object_typ_var); Oracle Database 11g: Implement Streams
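Putting the dequeue and unwrap steps together, here is a sketch of an anonymous block that a user or application might run on behalf of the ORDERS_DEQ messaging client from this lesson. The non-default dequeue_mode and wait values are assumptions shown for illustration:

```sql
DECLARE
  evt        SYS.AnyData;
  ord        ix.order_event_typ;
  string_var VARCHAR2(4000);
  retval     PLS_INTEGER;
BEGIN
  -- Explicit dequeue through the messaging client
  DBMS_STREAMS_MESSAGING.DEQUEUE(
    queue_name   => 'ix.streams_queue',
    streams_name => 'ORDERS_DEQ',                  -- messaging client name
    payload      => evt,
    dequeue_mode => 'REMOVE',                      -- consume the message
    wait         => DBMS_STREAMS_MESSAGING.NO_WAIT);
  -- Inspect the embedded type, then unwrap accordingly
  IF evt.GetTypeName() = 'IX.ORDER_EVENT_TYP' THEN
    retval := evt.GetObject(ord);
  ELSIF evt.GetType() = DBMS_TYPES.TYPECODE_VARCHAR2 THEN
    retval := evt.GetVarchar2(string_var);
  END IF;
  COMMIT;
END;
/
```

If no message satisfies the messaging client's rules, a no-wait dequeue raises an exception that the caller should handle.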

686 Message Handlers for User Messages
User messages (non-LCR messages) are always processed by a message handler. The message handler must perform any needed dependency checking. There can be only a single message handler per apply process. Message Handler for User Messages A user-enqueued message that does not contain an LCR is always processed by the message handler specified for an apply process. A message handler is a user-defined procedure that can process non-LCR user messages in a customized way for your environment. For example, a Web application may enqueue a non-LCR message into the Streams queue, such as information from an online order. In this case, the user message may contain attributes that you would expect for an order, such as the item ID, the quantity ordered, and the time it is to be shipped. A message handler could insert this application data into one or more tables, after performing checks on the data or transforming the information. The message handler offers advantages in any environment that has applications that need to update one or more remote databases or perform some other remote action. These applications can enqueue user messages into a queue at the local database, and Streams can propagate each user message to the appropriate queues at destination databases. A Streams apply process always assumes that a non-LCR message has no dependencies on any other messages in the queue. If dependencies exist between user-enqueued messages and the apply process parallelism is greater than one, you must design your message handler to evaluate and resolve these dependencies. If no message handler exists, the non-LCR messages are placed in the error queue by the apply process. Oracle Database 11g: Implement Streams

687 Oracle Database 11g: Implement Streams 18 - 687
Message Handlers A message handler is a user-defined procedure that can process non-LCR user messages in a customized way for your environment. You can specify a message handler for an apply process by using the message_handler parameter in the DBMS_APPLY_ADM package: CREATE_APPLY ALTER_APPLY The apply process handles the dequeue operation. If dependencies exist between these messages in your environment, set parallelism to 1 for the apply process. Message Handlers The message handler offers advantages in any environment that has applications that need to update one or more remote databases or perform some other remote action. These applications can enqueue user messages into a queue at the local database, and Streams can propagate each user message to the appropriate queues at destination databases. If there are multiple destinations, Streams provides the infrastructure for automatic propagation and processing of these messages at these destinations. For example, a message handler may format a user message into an electronic mail message. In this case, the user message may contain the attributes you would expect in an electronic mail message, such as from, to, subject, text_of_message, and so on. A message handler could convert these user messages into electronic mail messages and send them out through an electronic mail gateway. A Streams apply process always assumes that a non-LCR message has no dependencies on any other messages in the queue. Therefore, if dependencies exist between these messages in your environment, it is recommended that you set apply process parallelism to 1. Oracle Database 11g: Implement Streams

688 Message Handler: Example
CREATE OR REPLACE PROCEDURE order_events_mh (evt SYS.AnyData) IS ord order_event_typ; dummy NUMBER; BEGIN IF (evt.getTypeName='OE.ORDER_EVENT_TYP') THEN dummy := evt.getObject(ord); CASE WHEN ord.shipdays = 0 THEN <route billing>; WHEN ord.shipdays <= 3 THEN <route priority>; WHEN ord.shipdays > 3 THEN <route normal>; END CASE; INSERT INTO oe.order_log VALUES (ord.order_number,ord.part_number,SYSTIMESTAMP); END IF; END; Message Handler: Example The example in the slide shows part of the code for a message handler that uses a user-defined object type, ORDER_EVENT_TYP. In the message handler procedure, the message is passed in as an argument. The name of the object type is fetched from the message by using the GetTypeName member method procedure of the SYS.AnyData type. A message handler can process more than one type of encapsulated message. If the order has already been shipped, route the order to the accounts payable department. If the order needs to be shipped within three days, route the order to the priority shipping queue. If there are more than three days before the order needs to be shipped, route the order to the normal shipping queue. Each order is also logged in the local OE.ORDER_LOG table. The ORDER_EVENT_TYP object used in this example has the following structure: CREATE OR REPLACE TYPE order_event_typ AS OBJECT ( order_number NUMBER(10), part_number VARCHAR2(10), quantity VARCHAR2(10), customer_id VARCHAR2(6), shipdays NUMBER ); Oracle Database 11g: Implement Streams

689 Using a Message Handler
Create an apply process that is associated with the destination queue. Configure the message handler for the apply process. Create at least one rule for the apply process relating to the user-enqueued message, and place it in the positive or negative rule set. Start the apply process. Using a Message Handler The apply process always uses a message handler to process user-enqueued messages that are not LCR messages. For the propagation of a non-LCR message, you must add a rule (that evaluates to TRUE) to each propagation between the source queue and the destination queue. When the message reaches the destination queue and if it satisfies the rule conditions for the apply process, it is automatically dequeued and passed to the message handler. If a message handler does not exist, the message is placed in the error queue. Oracle Database 11g: Implement Streams

690 Implementing a Message Handler
Use the message_handler parameter of one of the following: DBMS_APPLY_ADM.CREATE_APPLY DBMS_APPLY_ADM.ALTER_APPLY EXECUTE DBMS_APPLY_ADM.create_apply ( - queue_name =>'ix.streams_queue', - apply_name =>'apply_site1_msg', - rule_set_name =>'ix.order_app_rs', - message_handler =>'oe.order_events_mh'); Implementing a Message Handler In the steps shown in the slide, a previously created message handler is associated with the apply process APPLY_SITE1_MSG. The ORDER_EVENTS_MH procedure inserts a row into the OE.ORDER_LOG table. Using apply forwarding, you could capture the INSERTs generated by the ORDER_EVENTS_MH procedure and send them to another database. To capture these changes you can do one of the following: Remove the apply tag for all redo generated by this apply process and then create a table-level capture rule and propagation rule. Specify that messages with non-NULL tags should also be captured and propagated to the destination sites. This may involve customizing the capture rules to avoid recapturing captured changes that have been applied by other apply processes. You must also add code to the ORDER_EVENTS_MH procedure to set a specific apply tag for the redo generated by the procedure. Add code to the message handler to create an LCR and enqueue it into a SYS.AnyData queue. You remove the message handler for an apply process by setting the remove_message_handler parameter to TRUE in DBMS_APPLY_ADM.ALTER_APPLY. Oracle Database 11g: Implement Streams
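The removal step described in the notes might look like the following, assuming the apply process created in the slide:

```sql
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name             => 'apply_site1_msg',
    remove_message_handler => TRUE);  -- detach the current message handler
END;
/
```

After this call, non-LCR messages dequeued by APPLY_SITE1_MSG have no handler and are placed in the error queue, so a replacement handler should normally be set in the same maintenance window.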

691 Configuring Message Notification
EXECUTE - DBMS_STREAMS_ADM.SET_MESSAGE_NOTIFICATION ( - streams_name => 'ORDERS_DEQ', - notification_action => '<e-mail address>', - notification_type => 'MAIL', - include_notification => true, - queue_name => 'IX.STREAMS_QUEUE'); Configuring Message Notification The SET_MESSAGE_NOTIFICATION procedure sets a notification for messages dequeued by a specified Streams messaging client from a specific queue. You can: Specify an e-mail address to which message notifications are sent Register a procedure to be invoked on a notification Register an HTTP URL to which the notification is posted The include_notification parameter enables and disables notification for the specified queue. If set to TRUE, the notification is added for the specified streams_name and streams_queue. If set to FALSE, the notification is removed for the specified streams_name and streams_queue. Oracle Database 11g: Implement Streams

692 Supported Notification Types
For HTTP notifications, specify a URL: notification_type => 'HTTP' notification_action => '<url>' For PL/SQL procedure notifications, specify a PL/SQL procedure name: notification_type => 'PROCEDURE' notification_action=>'oe.notify_addr_arrival' For e-mail notifications, specify an e-mail address: notification_type => 'MAIL' notification_action => '<e-mail address>' Supported Notification Types If you register for e-mail notifications, use the DBMS_AQELM package to set the host name and port number for the Simple Mail Transfer Protocol (SMTP) server that is used by the database to send notifications. If required, specify a value to be used for the sent-from field in all the notifications that are sent by the database. You need a Java-enabled database to use this feature. Here is an example: BEGIN DBMS_AQELM.SET_MAILHOST('rgmstsmtp.company_mailsvr.com'); DBMS_AQELM.SET_MAILPORT(25); END; / If you register for HTTP notifications, you can use the DBMS_AQELM package to set the host name and port number for the proxy server and a list of no-proxy domains that is used by the database to post HTTP notifications. Oracle Database 11g: Implement Streams

693 Oracle Database 11g: Implement Streams 18 - 693
Supported Notification Types (continued) Here is an example: BEGIN DBMS_AQELM.SET_PROXY( proxy => 'www-proxy.company.com:80', no_proxy_domains => 'corp.company.com,hq.company.com:80'); END; / For more information about using the DBMS_AQELM package, see the Oracle Streams Advanced Queuing User’s Guide and Reference. Oracle Database 11g: Implement Streams

694 Scalable Notifications
Notification server: Event monitor (EMON) on originating instance Multiprocess server: E000, E001, E002, E003, and E004 Autotuned memory in the Streams pool Scalable with a large number of system-wide simultaneous notifications Increased throughput of simultaneous notifications Scalable Notifications In earlier releases, the notification server (EMON) was a single-threaded, single-process server. In Oracle Database 11g, EMON is a multiprocess server, and memory required for notifications is autotuned in the Streams pool. This enables EMON to scale well with a large number of simultaneous notifications systemwide. This feature provides performance benefits by increasing the throughput of simultaneous notifications and by improving the overall performance of the RDBMS notification infrastructure significantly. The notification server (EMON) is now a multiprocess server, consisting of five processes: E000, E001, E002, E003, and E004. The instance that originates a message performs the notification. Oracle Database 11g: Implement Streams

695 Notification Grouping by Time
Message notifications: In user-specified format Grouped by time interval with the DBMS_AQ.REGISTER procedure Started and stopped by client applications Retained for user-specified duration (if client is unavailable) Notification Grouping by Time Oracle Streams AQ provides notification grouping by time for other namespaces, such as the AQ and ANONYMOUS namespaces, to provide a generalized notification grouping by time. You have the option of specifying the predefined format in which you want to be notified at the end of the notification interval. For example, you may want an application to be notified at most once every 10 minutes when the result set of an interesting query changes or a pertinent AQ message arrives. In such cases, the Oracle server records all messages of interest that occurred in the interim period and publishes the notification only after the required time interval has elapsed. At most one notification is published in any given time interval corresponding to the user-specified time and format. Messages of interest (such as AQ message enqueues, changes to registered queries, or matches on registered text strings) that occur during the lag period are recorded in the database. At the end of the lag period, if there were any interesting messages corresponding to the registration, a notification is published. Oracle Database 11g: Implement Streams

696 Oracle Database 11g: Implement Streams 18 - 696
Notification Grouping by Time (continued) The DBMS_AQ.REGISTER procedure allows the following grouping attributes: Class: Only TIME is supported now. Value: Seconds Type: Summary or last (also contains count of notifications received in group) Repeat count: How many times to do grouping; default is “forever.” Start time: When to start grouping Multiple-process notification for scalability: The notification server (EMON) is now a multiprocess server. This feature enables large numbers of simultaneous notifications systemwide. Oracle Database 11g: Implement Streams
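The grouping attributes listed above are supplied through the registration descriptor passed to DBMS_AQ.REGISTER. The following is only a hedged sketch: the attribute names (ntfn_grouping_class and related constants) reflect the 11g SYS.AQ$_REG_INFO type as I understand it, and the queue, consumer, and callback names are invented for illustration:

```sql
DECLARE
  reg_info SYS.AQ$_REG_INFO;
  reg_list SYS.AQ$_REG_INFO_LIST;
BEGIN
  -- Register a PL/SQL callback for queue IX.STREAMS_QUEUE, consumer ORDERS_DEQ
  reg_info := SYS.AQ$_REG_INFO(
                'IX.STREAMS_QUEUE:ORDERS_DEQ',   -- queue:consumer (illustrative)
                DBMS_AQ.NAMESPACE_AQ,
                'plsql://oe.order_grouped_cb',   -- hypothetical callback procedure
                HEXTORAW('FF'));
  -- Group notifications by time: at most one summary every 10 minutes
  reg_info.ntfn_grouping_class        := DBMS_AQ.NTFN_GROUPING_CLASS_TIME;
  reg_info.ntfn_grouping_value        := 600;    -- seconds
  reg_info.ntfn_grouping_type         := DBMS_AQ.NTFN_GROUPING_TYPE_SUMMARY;
  reg_info.ntfn_grouping_repeat_count := DBMS_AQ.NTFN_GROUPING_FOREVER;
  reg_list := SYS.AQ$_REG_INFO_LIST(reg_info);
  DBMS_AQ.REGISTER(reg_list, 1);
END;
/
```

With NTFN_GROUPING_TYPE_SUMMARY, the single callback invocation per interval carries a summary (including a count) of the messages that arrived during that interval.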

697 Purging the Staging Queue
The DBMS_AQADM.PURGE_QUEUE_TABLE procedure: Removes messages from persistent queues Is useful for both single-consumer and multiconsumer queues Can be customized to purge only messages that meet specific conditions Purging the Staging Queue Oracle Database 10g introduced the procedural interface for purging the queue tables or queues, offering a convenient, flexible, and efficient manner to clean up queue tables. By using the PURGE_QUEUE_TABLE procedure, an Oracle Streams administrator can perform various kinds of purge operations on both single-consumer and multiconsumer queue tables. Purging unneeded messages from the staging queue helps to keep the size of the queue small and improves the performance of queue access operations. Note: Captured messages are automatically purged. It is not necessary to purge these messages explicitly. Oracle Database 11g: Implement Streams

698 Purging the Staging Queue
When purging messages from a queue, you can specify: A purge condition Whether an exclusive lock is obtained during the purge operation DECLARE purge_opt DBMS_AQADM.aq$_purge_options_t; BEGIN purge_opt.block := TRUE; DBMS_AQADM.PURGE_QUEUE_TABLE ( queue_table=>'STRMADMIN.STREAMS_QUEUE_TABLE', purge_condition=>'queue=''STREAMS_QUEUE'' ', purge_options => purge_opt); END; Purging the Staging Queue (continued) When a purge call is made, the current contents of the queue table are queried, generating a result set referred to as a snapshot. All messages in the snapshot that satisfy the purge condition are purged from the queue table. Any messages enqueued after the snapshot was taken are not added to the snapshot. This prevents the purge operation from continuing indefinitely. Concurrent queue operations can proceed unaffected while purging a queue, because the purge operation does not obtain an exclusive lock on the queue by default. The queue does not need to be stopped, and enqueue and dequeue operations on the queue are not blocked while a queue is being purged. If there is high concurrency in the queue table at the time of the purge operation, the purge call may not be able to get a consistent snapshot. If this is the case, you can call the purge operation with the block purge option set to TRUE. An exclusive lock is taken against the queue table, which prevents any enqueue or dequeue operations from taking place but allows the purge operation to succeed. Oracle Database 11g: Implement Streams

699 Purge Conditions: Examples
Purge all messages in a queue table: Purge all messages for a queue in a queue table: Purge all messages in a particular state for a queue: Purge all messages for a consumer: Purge messages based on enqueue time: purge condition: NULL purge condition: queue = 'streams_queue' purge condition: queue = 'hr_queue' and msg_state = 'READY' purge condition: consumer_name='ORDERS_DEQ' purge condition: enq_time > '30-MAR-06' Purge Conditions: Examples You can specify conditions for purging messages from a queue table as SQL conditions based on columns of the AQ$<queue_table> view. These columns are: queue msgid or propagated_msgid corr_id user_data msg_priority or msg_state delay or delay_timestamp expiration, expiration_reason, or retry_count enq_time, enq_timestamp, enq_user_id, or enq_txn_id deq_time, deq_timestamp, deq_user_id, or deq_txn_id exception_queue_owner or exception_queue sender_name, sender_address, or sender_protocol original_msgid, original_queue_name, or original_queue_owner consumer_name, address, or protocol Oracle Database 11g: Implement Streams

700 Oracle Database 11g: Implement Streams 18 - 700
Purge Conditions: Examples (continued) The purge condition that you provide is inserted in the WHERE clause that is used to select the messages to be purged. All messages that are selected by SELECT * FROM aq$<queue_table> WHERE <purge_condition>; are candidates for purging. Messages can change state while the purge operation is happening. For example, the state of a message can change from READY to PROCESSED. A message that changes state between the time the purge operation starts and the time the purge operation ends is not purged, even if it satisfies the purge condition at the beginning and the end of the purge operation. For example, assume that a purge operation is called against the HR_QUEUE queue, and there are 10 messages in the queue that match the purge condition. Before the purge operation reaches it, the ninth message in the queue is dequeued. This message is not purged from the queue because it changed state during the purge operation. Oracle Database 11g: Implement Streams

701 Monitoring Streams Messaging
[ALL|DBA]_APPLY [ALL|DBA]_STREAMS_MESSAGE_CONSUMERS [ALL|DBA]_STREAMS_MESSAGE_RULES [ALL|DBA]_STREAMS_RULES Monitoring Streams Messaging The DBA_APPLY view can be queried to determine whether an apply process captures user-enqueued messages or captured messages. It can also be queried for message handler and rule set information. The DBA_STREAMS_MESSAGE_CONSUMERS view enables you to view the messaging clients defined for a queue, the rule sets being used, and the notification type and action: SELECT streams_name, rule_set_name, negative_rule_set_name, notification_type, notification_action FROM DBA_STREAMS_MESSAGE_CONSUMERS WHERE queue_owner = 'IX' AND queue_name = 'STREAMS_QUEUE'; STREAMS_NAME RULE_SET_NAME NEGATIVE_RULE_SET_NAME NOTIFICAT NOTIFICATION_ACTION ORDERS_DEQ RULESET$_61 MAIL <e-mail address> Oracle Database 11g: Implement Streams

702 Oracle Database 11g: Implement Streams 18 - 702
Monitoring Streams Messaging (continued) The DBA_STREAMS_MESSAGE_RULES view enables you to query all the rules that have been created with the procedures of DBMS_STREAMS_ADM: SELECT streams_name, streams_type, message_type_owner, message_type_name, rule_name, rule_condition FROM DBA_STREAMS_MESSAGE_RULES; STREAMS_NAME STREAMS_TYP MESSAGE_TYPE_OWNER MESSAGE_TYPE_NAME RULE_NAME RULE_CONDITION ORDERS_DEQ DEQUEUE OE CUST_ADDRESS_TYP RULE$_28 :"VAR$_26".country_id= 'IN' PROP_TO_SITE PROPAGATION OE CUST_ADDRESS_TYP RULE$_31 :"VAR$_26".COUNTRY_ID = 'IN' APPLY_SITE1_MSG APPLY OE CUST_ADDRESS_TYP RULE$_32 :"VAR$_26".CITY = 'ROMA' The DBA_STREAMS_RULES view enables you to query all the dequeue rules by querying for rows where the streams_type column has a value of DEQUEUE. Note The V$BUFFERED_PUBLISHERS and V$BUFFERED_SUBSCRIBERS views display information for captured message processing only. The DBA_SUBSCRIBERS view is for use with Change Data Capture. Monitoring Streams Messaging Using EM EM can also be used to monitor messaging as follows: Displaying Messages in a Queue Perform the following steps: 1. On the Streams page, click the Messaging subtab. 2. On the Messaging subtab, use the search tool to list the queue. 3. Select the queue. 4. Select Messages in the Actions list. 5. Click Go. 6. On the Messages page, use the search tool to filter the messages displayed and click Go. To display all messages in the queue, click Go without adjusting the search tool controls. 7. General information is displayed for the listed messages. Click Help for descriptions of the columns in the table. 8. To display detailed information about a message, click the message ID of the message. 9. The Message Details page displays detailed information about the message. Click Help for more information about this page. Oracle Database 11g: Implement Streams

703 Oracle Database 11g: Implement Streams 18 - 703
Monitoring Streams Messaging (continued) Displaying Messages in a Queue Table Perform the following steps: 1. On the Overview subtab on the Streams page, click the number of queue tables. 2. On the Queue Tables page, use the search tool to list the queue table. 3. Select the queue table. 4. Click Messages. 5. On the Messages page, use the search tool to filter the messages displayed and click Go. To display all messages in the queue table, click Go without adjusting the search tool controls. 6. General information is displayed for the listed messages. Click Help for descriptions of the columns in the table. 7. To display detailed information about a message, click the message ID of the message. 8. The Message Details page displays detailed information about the message. Click Help for more information about this page. Oracle Database 11g: Implement Streams

704 Oracle Database 11g: Implement Streams 18 - 704
Replicating Files [Diagram: an external file in a DIRECTORY object at SOURCE_DB is propagated through Streams as a BFILE to DEST_DB] Replicating Files The database can move any operating system file, thus enabling application developers to move and copy data that is not stored in the database. This helps applications keep data that is inside and outside the database consistent, and it provides a mechanism to provision data external to the database in a grid environment. You can propagate files between queues with Streams by enqueuing a message that contains one or more BFILE attributes. If propagation is configured between queues, Streams propagates the message and the file to the destination queue, which can be on a different database. The file can be a BFILE wrapped in a SYS.AnyData type, or you can send files as one or more BFILE attributes of a user-defined type wrapped in a SYS.AnyData type. Propagating a BFILE in Streams has the same restrictions as transmitting files by using the DBMS_FILE_TRANSFER.PUT_FILE procedure. You cannot specify files in a message payload as: One or more BFILE attributes in a VARRAY One or more BFILE attributes of a SYS.AnyData type that is an attribute of a user-defined type The DIRECTORY object and file name of each propagated BFILE are preserved, but you can map the DIRECTORY object to different directories on the source and destination databases. Oracle Database 11g: Implement Streams

705 Oracle Database 11g: Implement Streams 18 - 705
Summary In this lesson, you should have learned how to: Configure secure queue users for enqueuing and dequeuing messages Manually create and enqueue messages Configure a messaging client for dequeuing messages Explicitly dequeue messages Configure an apply process to dequeue user-created LCR messages Configure a message handler to dequeue non-LCR messages Configure message notification Oracle Database 11g: Implement Streams

706 Enqueuing and Dequeuing Messages

707 Oracle Database 11g: Implement Streams 18 - 707
Objectives After completing this lesson, you should be able to: Manually create and enqueue messages Explicitly dequeue messages Use buffered messaging for faster enqueue and dequeue operations Configure apply to automatically dequeue messages Purge messages from a queue Describe and resolve common problems for Advanced Queuing Troubleshoot secure queue access Objectives Streams enables messaging with queues of the SYS.AnyData type. These queues can stage user messages whose payloads are of the SYS.AnyData type. A SYS.AnyData payload can be a wrapper for payloads of different data types. User applications can explicitly enqueue messages into a queue. The user applications can format these user-enqueued messages as LCRs or user messages. These user-enqueued messages can be dequeued by an apply process, a messaging client, or a user application. Messages that were enqueued explicitly into a queue can be propagated to another queue or explicitly dequeued from the same queue. Streams includes the features of Oracle Streams Advanced Queuing (AQ), which supports all the standard features of message queuing systems, including: Multiconsumer queues Publish and subscribe Content-based routing Internet propagation Transformations Gateways to other messaging subsystems Oracle Database 11g: Implement Streams

708 Enqueuing a User Message in Streams
Use the DBMS_AQ.ENQUEUE procedure. Use the DBMS_STREAMS_MESSAGING.ENQUEUE procedure. Use JMS, OCI, OCCI, and so on. Enqueuing a User Message in Streams User-enqueued messages can contain LCRs or any other type of message. Any user message that is explicitly enqueued by a user or an application is called a user-enqueued message. Messages that were enqueued by a user procedure called from an apply process are also user-enqueued messages. You can specify a destination queue for a rule by using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package. If an apply process has such a rule in its positive rule set, and a message satisfies the rule, the apply process enqueues the message into the destination queue. A message that has been enqueued into a queue by using the SET_ENQUEUE_DESTINATION procedure is the same as any other user-enqueued message. Such messages can be manually dequeued, propagated to another queue, or applied by an apply process created with the apply_captured parameter set to FALSE.
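The SET_ENQUEUE_DESTINATION call described above takes a rule name and a destination queue. A sketch follows; the rule and queue names are assumptions for illustration:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.orders_rule',  -- rule in the apply process's positive rule set
    destination_queue_name => 'strmadmin.dest_queue');  -- queue that matching messages are enqueued into
END;
/
```

Any message that satisfies the named rule is then enqueued by the apply process into the destination queue instead of (or in addition to) being applied.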

709 Enqueuing a Message in Streams
DECLARE any_payld SYS.AnyData; BEGIN any_payld := SYS.AnyData.ConvertObject( ix.order_event_typ(2424, 3350, 146,'Elia', 'Fawcett', 4, TO_DATE('18-JUN-06'))); DBMS_STREAMS_MESSAGING.ENQUEUE( queue_name => 'IX.STREAMS_QUEUE', payload => any_payld); END; / Enqueuing a Message in Streams The example in the slide shows how to explicitly enqueue a message with a payload type of IX.ORDER_EVENT_TYP in a SYS.AnyData queue. Oracle Database 11g: Implement Streams

710 Creating User Messages with LCRs
To enqueue a user message that is an LCR: 1. Create the LCR by using a constructor function. 2. Use the SYS.AnyData.ConvertObject function to convert the LCR into the SYS.AnyData payload. 3. Enqueue it into the staging area. Constructor functions: SYS.LCR$_DDL_RECORD type DDL LCRs SYS.LCR$_ROW_RECORD type DML LCRs Creating User Messages with LCRs To create user messages with LCRs, you must set up a queue and add a subscriber for the queue. Then you can wrap and enqueue the LCR. The LCR can be dequeued with a messaging client. Oracle Database 11g: Implement Streams

711 LCR$_DDL_RECORD Constructor
STATIC FUNCTION CONSTRUCT( source_database_name IN VARCHAR2, command_type IN VARCHAR2, object_owner IN VARCHAR2, object_name IN VARCHAR2, object_type IN VARCHAR2, ddl_text IN CLOB, logon_user IN VARCHAR2, current_schema IN VARCHAR2, base_table_owner IN VARCHAR2, base_table_name IN VARCHAR2, tag IN RAW DEFAULT NULL, transaction_id IN VARCHAR2 DEFAULT NULL, scn IN NUMBER DEFAULT NULL) RETURN SYS.LCR$_DDL_RECORD; LCR$_DDL_RECORD Constructor This function creates a SYS.LCR$_DDL_RECORD object with the specified information: source_database_name: Database where the DDL statement occurred. If you do not include the domain name, the local domain is appended to the database name automatically. For example, if you specify SITE1 and the local domain is .NET, SITE1.NET is specified automatically. You must set this parameter to a non-NULL value. command_type: Type of command that is executed in the DDL statement. You must set this parameter to a non-NULL value. object_owner: User who owns the object on which the DDL statement was executed object_name: Database object on which the DDL statement was executed object_type: Type of object on which the DDL statement was executed. Valid types are CLUSTER, FUNCTION, INDEX, LINK (indicates a database link), OUTLINE, PACKAGE, PACKAGE BODY, PROCEDURE, SEQUENCE, SYNONYM, TABLE, TRIGGER, TYPE, USER, VIEW, and NULL. Specify NULL for all supported object types that are not listed. Oracle Database 11g: Implement Streams

LCR$_DDL_RECORD Constructor (continued) ddl_text: Text of the DDL statement. You must set this parameter to a non-NULL value. logon_user: The user whose session executed the DDL statement current_schema: The schema that is used if no schema is specified explicitly for the modified database objects in ddl_text. If a schema is specified in ddl_text that differs from the one specified for current_schema, the schema specified in ddl_text is used. This parameter must be set to a non-NULL value. base_table_owner: If the DDL statement is a table-related DDL (such as CREATE TABLE and ALTER TABLE), or if the DDL statement involves a table (such as creating a trigger on a table), base_table_owner specifies the owner of the table that is involved. Otherwise, base_table_owner is NULL. base_table_name: If the DDL statement is a table-related DDL (such as CREATE TABLE and ALTER TABLE), or if the DDL statement involves a table (such as creating a trigger on a table), base_table_name specifies the name of the table that is involved. Otherwise, base_table_name is NULL. tag: A binary tag that enables tracking of the LCR. For example, you may use this tag to determine the original source database of the DDL statement if apply forwarding is used. transaction_id: Identifier of the transaction scn: SCN at the time that the change record for a captured LCR was written to the redo. The SCN value is meaningless for a user-created LCR. Oracle Database 11g: Implement Streams
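Putting the constructor parameters described above together, a user-created DDL LCR might be built as follows. This is a sketch: the database name, schema, table, and DDL text are illustrative values, not part of the course environment:

```sql
DECLARE
  ddl_lcr SYS.LCR$_DDL_RECORD;
  ddl_txt CLOB := 'CREATE TABLE hr.test_tab (id NUMBER)';
BEGIN
  ddl_lcr := SYS.LCR$_DDL_RECORD.CONSTRUCT(
    source_database_name => 'SITE1.NET',
    command_type         => 'CREATE TABLE',
    object_owner         => 'HR',
    object_name          => 'TEST_TAB',
    object_type          => 'TABLE',
    ddl_text             => ddl_txt,
    logon_user           => 'HR',
    current_schema       => 'HR',
    base_table_owner     => 'HR',     -- table-related DDL, so the base table is set
    base_table_name      => 'TEST_TAB');
  -- The LCR can now be wrapped in SYS.AnyData and enqueued,
  -- as shown for row LCRs later in this lesson.
END;
/
```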

713 LCR$_ROW_RECORD Constructor
STATIC FUNCTION CONSTRUCT( source_database_name IN VARCHAR2, command_type IN VARCHAR2, object_owner IN VARCHAR2, object_name IN VARCHAR2, tag IN RAW DEFAULT NULL, transaction_id IN VARCHAR2 DEFAULT NULL, scn IN NUMBER DEFAULT NULL, old_values IN SYS.LCR$_ROW_LIST DEFAULT NULL, new_values IN SYS.LCR$_ROW_LIST DEFAULT NULL) RETURN SYS.LCR$_ROW_RECORD; LCR$_ROW_RECORD Constructor This function creates a SYS.LCR$_ROW_RECORD object with the specified information: source_database_name: Database where the DML statement occurred. If you do not include the domain name, the local domain is appended to the database name automatically. For example, if you specify SITE1 and the local domain is .NET, SITE1.NET is specified automatically. You must set this parameter to a non-NULL value. command_type: Type of command executed in the DML statement. You must set this parameter to a non-NULL value. Valid values are INSERT, UPDATE, DELETE, LOB ERASE, LOB WRITE, and LOB TRIM. object_owner: User who owns the object on which the DML was executed object_name: Database object on which the DML was executed tag: A binary tag that enables tracking of the LCR. For example, you may use this tag to determine the original source database of the DML statement if apply forwarding is used. transaction_id: Identifier of the transaction scn: SCN at the time that the change record for a captured LCR was written to the redo. The SCN value is meaningless for a user-created LCR. Oracle Database 11g: Implement Streams

LCR$_ROW_RECORD Constructor (continued) old_values: If the DML statement is an UPDATE or a DELETE statement, these are the values of columns in the row before the DML statement. You must set this parameter to NULL for LOB update operations. new_values: If the DML statement is an UPDATE or an INSERT statement, these are the values of columns in the row after the DML statement is applied. If the LCR reflects a LOB operation, new_values contains the supplementally logged columns and any relevant LOB information. Oracle Database 11g: Implement Streams

715 LCR$_ROW_LIST Constructor
TYPE SYS.LCR$_ROW_LIST AS TABLE OF SYS.LCR$_ROW_UNIT; TYPE LCR$_ROW_UNIT AS OBJECT ( column_name VARCHAR2(4000), data SYS.AnyData, lob_information NUMBER, lob_offset NUMBER, lob_operation_size NUMBER, long_information NUMBER); LCR$_ROW_LIST Constructor The LCR$_ROW_LIST type identifies a list of column values for a row in a table. This type uses the LCR$_ROW_UNIT type and is used in the LCR$_ROW_RECORD type. The LCR$_ROW_UNIT type identifies the value for a column in a row. For each column in a row, you need an individual call to the LCR$_ROW_UNIT constructor function to specify the column name and values: column_name: The name of the column data: The data contained in the column lob_information: Contains the LOB information for the column and contains one of the following values: DBMS_LCR.NOT_A_LOB CONSTANT NUMBER := 1; DBMS_LCR.NULL_LOB CONSTANT NUMBER := 2; DBMS_LCR.INLINE_LOB CONSTANT NUMBER := 3; DBMS_LCR.EMPTY_LOB CONSTANT NUMBER := 4; DBMS_LCR.LOB_CHUNK CONSTANT NUMBER := 5; DBMS_LCR.LAST_LOB_CHUNK CONSTANT NUMBER := 6; lob_offset: The LOB offset that is specified in the number of characters for CLOB columns and the number of bytes for BLOB columns. Valid values are NULL or a positive integer less than or equal to DBMS_LOB.LOBMAXSIZE. Oracle Database 11g: Implement Streams

LCR$_ROW_LIST Constructor (continued) lob_operation_size: If lob_information for the LOB is DBMS_LCR.LAST_LOB_CHUNK, this can be set to either a valid LOB ERASE value or a valid LOB TRIM value. A LOB_ERASE value must be a positive integer less than or equal to DBMS_LOB.LOBMAXSIZE. A LOB_TRIM value must be a nonnegative integer less than or equal to DBMS_LOB.LOBMAXSIZE. If lob_information is not DBMS_LCR.LAST_LOB_CHUNK, and for all other operations, lob_operation_size is NULL. long_information: Contains the LONG information for the column and contains one of the following values: DBMS_LCR.not_a_long CONSTANT NUMBER := 1; DBMS_LCR.null_long CONSTANT NUMBER := 2; DBMS_LCR.inline_long CONSTANT NUMBER := 3; DBMS_LCR.long_chunk CONSTANT NUMBER := 4; DBMS_LCR.last_long_chunk CONSTANT NUMBER := 5; Oracle Database 11g: Implement Streams

717 Specifying Column Values for an LCR
DECLARE col1 SYS.LCR$_ROW_UNIT; col2 SYS.LCR$_ROW_UNIT; ... newvals SYS.LCR$_ROW_LIST; BEGIN col1 := SYS.LCR$_ROW_UNIT( 'ORDER_ID', SYS.AnyData.ConvertNumber(2502), DBMS_LCR.NOT_A_LOB, NULL, NULL); col2 := SYS.LCR$_ROW_UNIT('ORDER_DATE', SYS.AnyData.ConvertTimestampLTZ('04-NOV-00'), DBMS_LCR.NOT_A_LOB, NULL, NULL); newvals := SYS.LCR$_ROW_LIST(col1,col2,...); CONSTRUCT_ROW_LCR('SITE1.NET', 'INSERT', 'OE', 'ORDERS', NULL, newvals); END; Specifying Column Values for an LCR In the example in the slide, an anonymous PL/SQL block is used to construct the row LCR. For each column in the table, an LCR$_ROW_UNIT variable is created and populated. These column variables are then used to create the LCR$_ROW_LIST object, which is passed to the CONSTRUCT_ROW_LCR procedure (shown in the next slide) as an argument. Because this example constructs only the newvals list, this LCR represents an INSERT operation. For a DELETE operation, you must create an oldvals list. For an UPDATE, you must create both an oldvals list and a newvals list, as shown here: DECLARE oldunit sys.lcr$_row_unit; newunit sys.lcr$_row_unit; oldvals sys.lcr$_row_list; newvals sys.lcr$_row_list; BEGIN oldunit := SYS.LCR$_ROW_UNIT('COL1', SYS.AnyData.ConvertVarchar2('oldval_1'), DBMS_LCR.NOT_A_LOB,NULL,NULL);

Specifying Column Values for an LCR (continued) newunit := SYS.LCR$_ROW_UNIT('COL1', SYS.AnyData.convertVARCHAR2('newval_1'), DBMS_LCR.NOT_A_LOB,NULL,NULL); oldvals := SYS.LCR$_ROW_LIST(oldunit); newvals := SYS.LCR$_ROW_LIST(newunit); CONSTRUCT_ROW_LCR( source_dbname => 'SITE1.NET', cmd_type => 'UPDATE', owner => 'DEMO', name => 'TEST', old_vals => oldvals, new_vals => newvals); END; / Oracle Database 11g: Implement Streams

Creating an LCR CREATE OR REPLACE PROCEDURE construct_row_lcr( source_dbname VARCHAR2, cmd_type VARCHAR2, owner VARCHAR2, name VARCHAR2, old_vals SYS.LCR$_ROW_LIST, new_vals SYS.LCR$_ROW_LIST) AS row_lcr SYS.LCR$_ROW_RECORD; BEGIN -- Construct the LCR based on IN variables... row_lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT( source_database_name => source_dbname, command_type => cmd_type, object_owner => owner, object_name => name, old_values => old_vals,new_values => new_vals); enqueue_lcr (row_lcr); END; / Creating an LCR To create an LCR, simply specify a value for each attribute of the LCR type. In the example in the slide, you specify values for all the attributes of a row LCR. These values were passed to the procedure. After the LCR is created, you can wrap the LCR in the SYS.AnyData type and enqueue it in the staging area. You can also pass the LCR to another procedure to perform the enqueue operation (as shown in the slide). The information needed to populate the LCR$_ROW_RECORD type attributes is passed in to the CONSTRUCT_ROW_LCR procedure. The LCR$_ROW_LIST types were created in the previous example. Other methods for obtaining the column data values for a DML LCR include: Using a procedure that selects the current data from a table to create the OLD_VALUES attribute and prompts the user for new column values to create the NEW_VALUES attribute Implementing a trigger on a table, and creating the OLD_VALUES and NEW_VALUES attributes from the OLD and NEW values of the trigger Oracle Database 11g: Implement Streams

720 Enqueuing a User-Created LCR
CREATE OR REPLACE PROCEDURE enqueue_lcr ( row_lcr SYS.LCR$_ROW_RECORD) AS eopt DBMS_AQ.ENQUEUE_OPTIONS_T; mprop DBMS_AQ.MESSAGE_PROPERTIES_T; enq_msgid RAW(16); BEGIN mprop.SENDER_ID := SYS.AQ$_AGENT('strmadmin',NULL,NULL); DBMS_AQ.ENQUEUE( queue_name => 'ix.streams_queue', enqueue_options => eopt, message_properties => mprop, payload => SYS.AnyData.ConvertObject(row_lcr), msgid => enq_msgid); END; / Enqueuing a User-Created LCR After all the attribute values have been specified, you wrap the LCR into a SYS.AnyData type and enqueue it into a staging queue. In this example, the row LCR is passed into the ENQUEUE_LCR procedure as an argument, and the procedure then calls DBMS_AQ.ENQUEUE to enqueue the LCR into the specified queue. Oracle Database 11g: Implement Streams

721 Enqueuing a User-Created LCR
CREATE OR REPLACE PROCEDURE enqueue_lcr ( row_lcr SYS.LCR$_ROW_RECORD) AS any_payld SYS.AnyData; BEGIN any_payld := SYS.AnyData.ConvertObject(row_lcr); DBMS_STREAMS_MESSAGING.ENQUEUE( queue_name => 'IX.STREAMS_QUEUE', payload => any_payld); COMMIT; END; / Enqueuing a User-Created LCR (continued) In place of the traditional DBMS_AQ procedural interface, you can also use the DBMS_STREAMS_MESSAGING package to easily enqueue and dequeue messages in staging queues. The following conditions must be satisfied before you use this procedure: The current database must contain the specified queue, and the queue must be a secure queue of the SYS.AnyData type. The current user must be mapped to a unique AQ agent with the same name as the current user. Each message enqueued must have a subscriber, and the subscriber must be defined before the message is enqueued. Oracle Database 11g: Implement Streams
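The subscriber that DBMS_STREAMS_MESSAGING.ENQUEUE requires can be defined with DBMS_AQADM.ADD_SUBSCRIBER before any message is enqueued. A sketch follows; the agent name ORDERS_DEQ is illustrative:

```sql
BEGIN
  DBMS_AQADM.ADD_SUBSCRIBER(
    queue_name => 'ix.streams_queue',
    subscriber => SYS.AQ$_AGENT('ORDERS_DEQ', NULL, NULL));  -- name, address, protocol
END;
/
```

With the address and protocol attributes NULL, the agent is a local subscriber; messages enqueued afterward are visible to it for dequeue.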

722 Explicit Dequeue of User Messages
Use the DBMS_AQ.DEQUEUE procedure to manually dequeue messages. The user who dequeues the messages from the SYS.AnyData queue must: Be either a recipient who is specified during enqueue or a subscriber to the queue Be configured as a secure agent Know the specified consumer for the message before dequeuing Use a subscriber rule to specify which messages should be dequeued. Explicit Dequeue of User Messages To explicitly dequeue messages, you must know the consumer of the messages. To find the consumer for the messages in a queue, connect as the owner of the queue and query AQ$<queue_table_name>, where queue_table_name is the name of the queue table. For example, to find the consumers of the messages in the streams_queue queue, run the following query as the STRMADMIN user: SELECT msg_id, msg_state, consumer_name FROM AQ$STREAMS_QUEUE_TABLE; To explicitly dequeue messages, create a procedure that uses the DBMS_AQ.DEQUEUE procedure. If you are dequeuing from a SYS.AnyData type queue, you must unwrap the SYS.AnyData type. If the dequeued message is an LCR, you can use the EXECUTE member method to explicitly apply the LCR to the target database object. The publish/subscribe capability of Oracle Streams allows a publisher application to enqueue messages to a queue anonymously (no recipients specified). Users define a rule-based subscription for a given queue to specify interest in receiving specific messages. Rules based on message properties and message data are used to identify specific messages. Subscriber rules are then used to evaluate recipients for message delivery of the enqueued messages. Subscriber applications can then use the DBMS_AQ.DEQUEUE procedure to retrieve messages that match the subscription criteria. Oracle Database 11g: Implement Streams

723 Explicit Dequeue: Example
CREATE OR REPLACE PROCEDURE explicit_dq (consumer IN VARCHAR2) AS dequeue_options DBMS_AQ.DEQUEUE_OPTIONS_T; message_prop DBMS_AQ.MESSAGE_PROPERTIES_T; message_handle RAW(16); message SYS.AnyData; BEGIN dequeue_options.consumer_name := consumer; DBMS_AQ.DEQUEUE( queue_name => 'ix.streams_queue', dequeue_options => dequeue_options, message_properties => message_prop, payload => message, msgid => message_handle); COMMIT; END; / Explicit Dequeue: Example The example in the slide creates a procedure that takes as input the consumer of the messages that you want to dequeue. The procedure then dequeues a message for that consumer of the SYS.AnyData type from a queue named STREAMS_QUEUE in the IX schema. None of the default dequeue options or default message properties, other than the consumer name, are changed. This procedure commits after the dequeue of the message. The COMMIT informs the queue that the dequeued messages have been consumed successfully by this subscriber. Note: For more information about the DBMS_AQ package, the DEQUEUE procedure, rule-based subscriptions, dequeue options, and message properties, see the following Oracle documentation: Oracle Streams Advanced Queuing User’s Guide and Reference PL/SQL Packages and Types Reference Oracle Database 11g: Implement Streams
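After DBMS_AQ.DEQUEUE returns, the SYS.AnyData payload must be unwrapped, and if it contains a row LCR, the EXECUTE member procedure can apply it directly, as the notes above describe. A hedged sketch of such a helper, assuming the payload was obtained as in the EXPLICIT_DQ procedure:

```sql
CREATE OR REPLACE PROCEDURE process_payload (message IN SYS.AnyData) AS
  type_name VARCHAR2(61);
  row_lcr   SYS.LCR$_ROW_RECORD;
  rc        PLS_INTEGER;
BEGIN
  type_name := message.GetTypeName;
  IF type_name = 'SYS.LCR$_ROW_RECORD' THEN
    rc := message.GetObject(row_lcr);  -- unwrap the row LCR from the AnyData payload
    row_lcr.EXECUTE(TRUE);             -- apply the change; TRUE enables conflict resolution
  END IF;
END;
/
```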

724 Configuring Propagation of Non-LCR Messages
BEGIN DBMS_STREAMS_ADM.ADD_MESSAGE_PROPAGATION_RULE ( message_type =>'IX.ORDER_EVENT_TYP', rule_condition =>':MSG.ORDER_STATUS < 6', streams_name =>'PROP_TO_SITE2', source_queue_name =>'IX.STREAMS_QUEUE', destination_queue_name => END; / Configuring Propagation of Non-LCR Messages You can use the ADD_MESSAGE_PROPAGATION_RULE procedure to add a message rule to either the positive rule set or the negative rule set of an existing propagation, or to create a new propagation if one does not already exist. The ADD_MESSAGE_PROPAGATION_RULE procedure uses the information that you supply to create a new evaluation context, and associates the evaluation context with the new rule. The evaluation context has a system-generated name. If you are using Oracle9i Release 2, you must manually create the necessary evaluation context and custom rule condition. The ADD_MESSAGE_PROPAGATION_RULE procedure also configures propagation by using the current user and establishes a default propagation schedule. If no propagation job exists for the database link that is specified in the destination_queue_name parameter when this procedure is run, a propagation job is created for use by the propagation. If a propagation job is created, the user who runs this procedure owns the propagation job. If a propagation job already exists for the specified database link, the propagation uses the existing propagation job. Oracle Database 11g: Implement Streams

725 Queue-to-Queue Propagation
Fine-grained control of propagation Service based in RAC environments CREATE DATABASE LINK RACDB_GLOBAL_NAME … USING 'CONNECT_DATA=(service_name=RACDB_GLOBAL_NAME)'; Queue-to-Queue Propagation Before Oracle Database 10g Release 2, Oracle Streams AQ supported only queue-to-database link propagation. Now, Oracle Streams AQ has added queue-to-queue propagation. A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. In a Real Application Clusters (RAC) environment, when the destination queue in a queue-to-queue propagation is a buffered queue, the queue-to-queue propagation uses a service for transparent failover to another instance if the primary RAC instance fails. You can implement queue-to-queue propagation by using one of the following methods: Specifying the queue_to_queue parameter when using DBMS_AQADM.ADD_SUBSCRIBER: PROCEDURE add_subscriber( queue_name IN VARCHAR2, subscriber IN SYS.AQ$_AGENT, rule IN VARCHAR2 DEFAULT NULL, transformation IN VARCHAR2 DEFAULT NULL, queue_to_queue IN BOOLEAN DEFAULT FALSE, delivery_mode IN PLS_INTEGER DEFAULT DBMS_AQADM.PERSISTENT) Oracle Database 11g: Implement Streams

Queue-to-Queue Propagation (continued) Setting the DESTINATION_QUEUE parameter when using the DBMS_AQADM. SCHEDULE_PROPAGATION procedure: PROCEDURE schedule_propagation( queue_name IN VARCHAR2, destination IN VARCHAR2 DEFAULT NULL, start_time IN DATE DEFAULT SYSDATE, duration IN NUMBER DEFAULT NULL, next_time IN VARCHAR2 DEFAULT NULL, latency IN NUMBER DEFAULT 60, destination_queue IN VARCHAR2 DEFAULT NULL) The value specified for destination in this procedure is a database link and the value specified for destination_queue is the name of the target queue to which messages are to be propagated. Using the ADD_*_PROPAGATION_RULES procedures of the DBMS_STREAMS_ADM package and setting the queue_to_queue parameter to TRUE You can view queue-to-queue propagation information by querying the DBA_PROPAGATION view. The database link that you create for queue-to-queue propagation uses the service name for the RAC database that is equal to the global database name for the RAC database. The global name of the RAC database must be the same as <DB_NAME>.<DB_DOMAIN>. If this is not the case, you must start the service name used in the connection string as a service. Interoperability Notes Any existing propagation schedules continue to work as queue-to-database link propagations following an upgrade to Oracle Database 10g, Release 2. Queue-to-queue propagation can be created only in a database that has compatibility set to or higher. If you use an Oracle Database version or as a source database, propagation continues to function by using queue-to-database link propagation. If you use an Oracle Database 10g, Release 2 database as the source database and propagate changes to an earlier version of Oracle Database, only queue-to-database link propagation is supported. To implement such a propagation, you must specify queue_to_queue => FALSE when creating propagation rules. 
If the destination database is up when you create the propagation schedule, the target database compatibility is determined automatically, and an error is raised appropriately during create propagation, if necessary. If the destination database is down when you create the propagation schedule, the compatibility check is deferred until propagation run time. If, at that time, it is discovered that queue-to-queue propagation has been specified for an Oracle Database of version or earlier, an “unsupported” error is raised and the existing messages in the queue must be manually migrated. Oracle Database 11g: Implement Streams
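The third method listed above, using the DBMS_STREAMS_ADM rule procedures with queue_to_queue set to TRUE, can be sketched as follows; the schema, queue, and database-link names are illustrative:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => 'prop_to_site2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@site2.net',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    queue_to_queue         => TRUE);  -- dedicated propagation job for this destination queue
END;
/
```

Because queue_to_queue is TRUE, the propagation gets its own exclusive job, and in a RAC environment it can fail over transparently by using the destination queue's service.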

727 Dequeuing LCRs with DBMS_STREAMS_MESSAGING
DECLARE any_payld SYS.AnyData; row_lcr SYS.LCR$_ROW_RECORD; type_name VARCHAR2(61); num_var PLS_INTEGER; BEGIN DBMS_STREAMS_MESSAGING.DEQUEUE( queue_name => 'ix.streams_queue', streams_name =>'orders_deq', payload=>any_payld, navigation => 'NEXT MESSAGE'); type_name := any_payld.GetTypeName; IF type_name = 'SYS.LCR$_ROW_RECORD' THEN num_var := any_payld.GETOBJECT(row_lcr); DBMS_OUTPUT.PUT_LINE('Successfully retrieved LCR for ' || row_lcr.GET_OBJECT_NAME); ... Dequeuing LCRs with DBMS_STREAMS_MESSAGING The DEQUEUE procedure of DBMS_STREAMS_MESSAGING uses the specified Streams messaging client to dequeue an explicitly enqueued message from the specified queue. The example shown in the slide dequeues the next message available for dequeue from STREAMS_QUEUE in the IX schema. The messaging client is ORDERS_DEQ. After the payload has been retrieved, the embedded data is checked to see whether it is a row LCR. If so, the row LCR is extracted using the SYS.AnyData member method GetObject, and a message is printed confirming receipt of the LCR and the table that it applies to. The messaging client is specified with the streams_name parameter. For the DEQUEUE procedure to work, the specified client must have been previously configured with the privileges to access the queue. On the previous page, the messaging client was automatically configured with the DBMS_STREAMS_ADM.ADD_MESSAGE_RULE procedure. The streams_name parameter can be set to NULL only if there is a single relevant messaging client that can be used. If there are multiple relevant messaging clients, or no relevant messaging client, an error is raised and no messages are dequeued. When dequeuing messages, you can specify a dequeue mode (which specifies the actions to take when removing messages from the queue) and a navigation mode (which specifies the position of the message that is retrieved). Oracle Database 11g: Implement Streams

Dequeuing LCRs with DBMS_STREAMS_MESSAGING (continued) You can specify one of the following values for the dequeue_mode parameter: REMOVE LOCKED BROWSE The navigation parameter can be set to: NEXT MESSAGE NEXT TRANSACTION FIRST MESSAGE Note: For more information about dequeue modes and navigation concepts, see the Oracle Streams Advanced Queuing User’s Guide and Reference. Oracle Database 11g: Implement Streams

729 Dequeuing Messages with DBMS_STREAMS_MESSAGING
DECLARE any_payld SYS.AnyData; ord ix.order_event_typ; ... BEGIN DBMS_STREAMS_MESSAGING.DEQUEUE( queue_name => 'ix.streams_queue', streams_name =>'orders_deq', payload=>any_payld, navigation => 'FIRST MESSAGE'); type_name := any_payld.GetTypeName; IF type_name = 'IX.ORDER_EVENT_TYP' THEN num_var := any_payld.GETOBJECT(ord); DBMS_OUTPUT.PUT_LINE('Successfully retrieved order ' || ord.order_id); END IF; END; Dequeuing Messages with DBMS_STREAMS_MESSAGING The example shown in the slide dequeues the next message available for dequeue from STREAMS_QUEUE in the IX schema. The messaging client is ORDERS_DEQ. After the payload has been retrieved, the object type is extracted using the SYS.AnyData member method GetObject, and a message is printed confirming receipt of the correct data. The messaging client is specified with the streams_name parameter. The default dequeue mode used is REMOVE, which means that the message is read and then deleted from the queue. The navigation mode is set to FIRST MESSAGE, which resets the position to the beginning of the queue, and then retrieves the first available message that matches the search criteria.

730 Applying User-Created LCR Messages
At the destination site, create an apply process that is associated with the destination queue. Create rules for the apply process that match the user-enqueued LCR metadata. EXECUTE DBMS_APPLY_ADM.CREATE_APPLY(- queue_name => 'IX.STREAMS_QUEUE', - apply_name => 'APPLY_SITE1_LCRS', - apply_user => 'IX', - apply_captured => FALSE); EXECUTE DBMS_STREAMS_ADM.ADD_TABLE_RULES(- 'OE.ORDERS', 'APPLY', 'APPLY_SITE1_LCRS', - 'IX.STREAMS_QUEUE'); Applying User-Created LCR Messages If you explicitly enqueue a SYS.AnyData-wrapped LCR, the LCR can be applied automatically by an apply process. However, because this LCR is not a captured LCR, you must create an apply process with the apply_captured parameter set to FALSE. You cannot change this setting later with the DBMS_APPLY_ADM.SET_PARAMETER procedure. After a user-created LCR has been enqueued, it is propagated through the stream in a similar way as a captured LCR message. When the LCR reaches a destination queue, if it satisfies the rule conditions for a properly configured apply process, it is automatically dequeued and executed. You can specify table, schema, or global rules for the apply process. You can also configure the apply process to use an apply handler. If you enqueue the LCR as an attribute of a user-defined type in a SYS.AnyData queue, a message handler must be defined for an apply process. The apply process dequeues the message, and the message handler extracts the LCR. The message handler can call the EXECUTE member procedure of the extracted LCR, which results in the changes being applied to the shared table.

Applying User-Created LCR Messages (continued) When you configure the rule-based apply of a user message, you do not use the source database as part of the system-created rule condition. If the streams_type parameter is set to DEQUEUE, the ADD_MESSAGE_RULE procedure creates a dequeue rule for the specified messaging client. The specified user must have the necessary privileges to perform these actions. You specify a messaging client by providing the client name as the value for the streams_name parameter. Oracle Database 11g: Implement Streams
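A message handler such as the one described above is registered on an existing apply process with DBMS_APPLY_ADM.ALTER_APPLY; the handler procedure name here is an assumption for illustration:

```sql
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name      => 'apply_site1_lcrs',
    message_handler => 'strmadmin.order_msg_handler');  -- invoked for each non-LCR user message
END;
/
```

The named handler procedure must accept a single SYS.AnyData argument; inside it, the payload can be unwrapped and, if it carries an embedded LCR, the LCR's EXECUTE member procedure can be called.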

Buffered Messaging The Oracle Database provides a persistent queuing implementation, Oracle Streams AQ. Oracle Streams AQ implements persistent message queues by using database tables and index-organized tables (IOTs). A lightweight messaging system named buffered messaging is available that stores messages in shared memory. Buffered messages have all the properties of persistent Oracle Streams AQ messages except for persistence. Buffered message enqueue and dequeue operations are much faster than persistent message operations because, optimally, there is no disk access. Also, unlike persistent message operations, no redo is written to disk. Because shared memory is limited, there are times when buffered messages may have to be spilled to disk to free memory for newer messages. The spill is automatic. In earlier database versions, flow control may also have to be enabled to prevent applications from flooding the shared memory at times when the message consumers are slow or have stopped for some reason.

733 Buffered Messaging Flow Control
Buffered Messaging Flow Control Because memory is limited, it is necessary to manage memory use for optimal performance. The memory management scheme for buffered messaging tries to keep the system in a state where disk operations are minimized. It is recommended that you configure a Streams pool when using buffered messages. The Automatic Database Diagnostic Monitor (ADDM) produces advisories to increase the Streams pool if the maximum allowed size is not sufficient. A flow control scheme prevents applications from flooding the shared memory with messages. If the number of unread messages enqueued by an application (identified by enqueue_option, sender_name) exceeds a system-determined limit, the application is blocked from enqueuing more messages until one of the subscribers has read some of its messages. Flow control also imposes a limit on the size of the message that can be kept in memory. If a message exceeds this size, it is spilled immediately to disk.

Buffered Messaging Flow Control (continued) Even with flow control implemented, slower consumers of a multiconsumer queue may cause the buffered messages in a queue to grow unboundedly. In such a situation, if the total memory consumption of buffered messages starts approaching the available shared memory limit, older messages are spilled to disk to free up memory. For priority-ordered queues, “older” means messages of lower priority. This method ensures that the cost of disk access for buffered messages that are no longer in memory is paid by the slower consumers, and faster subscribers can proceed unhindered. Applications can also use expiration to manage memory, by configuring the system so that older messages are deleted, thus freeing up the shared memory for newer messages. For example, if you are enqueuing news reports on a developing situation, and posting updates every 30 minutes, you can set the expiration time of the messages enqueued to 30 minutes. If the system is running out of shared memory, the older messages expire, making room for the more recent news articles. Note: Even when some messages are spilled to disk, access to messages in memory is not affected. The indexes that maintain the order of messages are always in memory, so the order between buffered messages on disk and in-memory is maintained. Oracle Database 11g: Implement Streams
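The expiration-based memory management described above is configured per message through the message properties at enqueue time. A sketch with a 30-minute expiration, expressed in seconds; the queue name and payload are illustrative:

```sql
DECLARE
  enq_opt DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop   DBMS_AQ.MESSAGE_PROPERTIES_T;
  msgid   RAW(16);
  payload SYS.AnyData;
BEGIN
  payload := SYS.AnyData.ConvertVarchar2('News update ...');
  mprop.expiration := 1800;  -- seconds; an unconsumed message is expired after 30 minutes
  DBMS_AQ.ENQUEUE(
    queue_name         => 'ix.streams_queue',
    enqueue_options    => enq_opt,
    message_properties => mprop,
    payload            => payload,
    msgid              => msgid);
  COMMIT;
END;
/
```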

735 Buffered Messaging Support
SGA1 SGA2 SGA3 RACDB1 RACDB2 RACDB3 Buffered enqueue Buffered Messaging Support Your application can enqueue and dequeue buffered messages from any RAC instance, as long as it uses password-based authentication to connect to the database. The in-memory structures used to implement buffered messages are implemented on one RAC instance. Enqueue or dequeue requests received at other instances are forwarded to this instance over the interconnect. A service name is associated with each queue in RAC. This service name always points to the instance with the most efficient access for buffered messaging so that interinstance communication (pinging) is minimized. Oracle Call Interface (OCI) clients can use this service name to connect to the instance for buffered messaging operations on the queue. If you use queue-to-queue propagation, the database link for propagation always connects to the instance with the most optimal access to the queue for buffered messaging. The catalog views DBA_QUEUE_TABLES and USER_QUEUE_TABLES display the service name associated with the queue. The GV$SERVICES view displays the active services for a RAC database, including the queue service. Oracle Database 11g: Implement Streams
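To locate the queue service mentioned above, you can query the catalog views named in the notes. A hedged example follows; the service-name column is assumed to be NETWORK_NAME, and TEST_QUEUE_TAB is a hypothetical queue table:

```sql
-- Hedged sketch: show the owning instance and the service name
-- associated with a queue table in RAC.
SELECT owner, queue_table, owner_instance, network_name
FROM   dba_queue_tables
WHERE  queue_table = 'TEST_QUEUE_TAB';
```

An OCI client can then place the returned service name in its connect descriptor to reach the instance that owns the buffered-messaging structures directly.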

736 Enqueuing Buffered Messages by Using PL/SQL
DECLARE enq_opt DBMS_AQ.enqueue_options_t; mesg_prop DBMS_AQ.message_properties_t; mesg_hnd RAW(16); BEGIN enq_opt.visibility := DBMS_AQ.IMMEDIATE; enq_opt.delivery_mode := DBMS_AQ.BUFFERED; DBMS_AQ.ENQUEUE( queue_name => 'IX.test_queue', enqueue_options => enq_opt, message_properties => mesg_prop, payload => 'Testing buffered messaging…', msgid => mesg_hnd); COMMIT; END; / Enqueuing Buffered Messages by Using PL/SQL Buffered and persistent messages use the same single-consumer or multiconsumer queues and the same administrative and operational interfaces. They are distinguished from each other by a delivery mode parameter, which is set by the application when enqueuing the message to an Oracle Streams AQ queue. Buffered messages are supported only for queues that are created in an Oracle Database with COMPATIBLE set to 10.2 or higher. Propagation of buffered messages between 10.2-compatible queues and queues of lesser compatibility is not supported. You can specify a delivery_mode of BUFFERED in the following PL/SQL procedures and types: DBMS_AQADM.ADD_SUBSCRIBER SYS.AQ$_PURGE_OPTIONS_T SYS.MESSAGE_PROPERTIES_T SYS.ENQUEUE_OPTIONS_T SYS.DEQUEUE_OPTIONS_T DBMS_AQ.LISTEN Buffered messages can be queried by using the AQ$<Queue_Table_Name> view. They appear with the BUFFERED-IN-MEMORY or BUFFERED-SPILLED states. Oracle Database 11g: Implement Streams

737 Oracle Database 11g: Implement Streams 18 - 737
Enqueuing Buffered Messages by Using PL/SQL (continued) Refer to the Oracle Streams Advanced Queuing User’s Guide and Reference for examples of using the following procedures with buffered messages: DBMS_AQADM.CREATE_SUBSCRIBER DBMS_AQADM.PURGE_QUEUE_TABLE DBMS_AQ.DEQUEUE DBMS_AQ.REGISTER DBMS_AQ.LISTEN Oracle Database 11g: Implement Streams
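As a counterpart to the enqueue slide, a buffered dequeue sets the same delivery mode and must likewise use IMMEDIATE visibility. A minimal sketch, assuming IX.test_queue takes a VARCHAR2-convertible payload:

```sql
-- Hedged sketch: dequeue one buffered message without waiting.
DECLARE
  deq_opt   DBMS_AQ.dequeue_options_t;
  mesg_prop DBMS_AQ.message_properties_t;
  payload   VARCHAR2(4000);
  mesg_hnd  RAW(16);
BEGIN
  deq_opt.visibility    := DBMS_AQ.IMMEDIATE;   -- required for buffered messages
  deq_opt.delivery_mode := DBMS_AQ.BUFFERED;
  deq_opt.navigation    := DBMS_AQ.FIRST_MESSAGE;
  deq_opt.wait          := DBMS_AQ.NO_WAIT;     -- raise an error if queue is empty
  DBMS_AQ.DEQUEUE(
    queue_name         => 'IX.test_queue',
    dequeue_options    => deq_opt,
    message_properties => mesg_prop,
    payload            => payload,
    msgid              => mesg_hnd);
END;
/
```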

738 Debugging Oracle Streams AQ Propagation Problems
Check the existence of an enabled propagation schedule. Check the assigned job queue processes. Check the DBA_QUEUE_SCHEDULES view to see whether propagation is occurring. As queue owner, check the database link. Check the job queue process activity. Check for messages in the source queue. Check for messages in the destination queue. continued… Debugging Oracle Streams AQ Propagation Problems This workflow assumes that you have queue tables and queues in the source and target databases and a database link for the destination database. 1. Look for an entry in the DBA_QUEUE_SCHEDULES view. The SCHEDULE_DISABLED column must be set to N, which means the schedule is enabled. 2. Look for a non-zero JOBNO entry in the AQ$_SCHEDULES table and confirm that this JOBNO has an entry in the JOB$ table. 3. To check whether propagation is occurring, monitor the DBA_QUEUE_SCHEDULES view for the TOTAL_NUMBER of propagated messages. Check this view for errors if propagation is not occurring. Check the NEXT_RUN_DATE and NEXT_RUN_TIME columns in the DBA_QUEUE_SCHEDULES view to see whether propagation is scheduled for a later time. 4. As queue owner, check whether the database link to the destination database is correctly set up: select count(*) from dual@<destination_db_link>; 5. Make sure that at least two job queue processes are running. 6. To check source queue messages: select count (*) from AQ$<source_queue_table> where q_name = 'source_queue_name'; 7. To check destination queue messages: select count (*) from AQ$<destination_queue_table> where q_name = 'destination_queue_name'; Oracle Database 11g: Implement Streams

739 Debugging Oracle Streams AQ Propagation Problems
8. Check resource usage of other jobs in the DBA_SCHEDULER_RUNNING_JOBS view. 9. Check the existence of the related queue table and queue in the DBA_QUEUE_TABLES and DBA_QUEUES views. 10. Check that the consumer attempting to dequeue a message is a recipient of the propagated messages. Check for deprecated 8.0-style queues. Check 8.1-style queues. 11. Enable propagation tracing at the highest level. Debugging Oracle Streams AQ Propagation Problems (continued) 8. Look for other users who are using job queue processes by querying the DBA_SCHEDULER_RUNNING_JOBS view. Their resource usage may be “starving” the propagation jobs. 9. Look for the queue table’s SYS.AQ$_PROP_TABLE_INSTNO entry in the DBA_QUEUE_TABLES and DBA_QUEUES views. It must be enabled for enqueue and dequeue. 10. To check deprecated 8.0-style queues: SELECT H.CONSUMER, H.TRANSACTION_ID, H.DEQ_TIME, H.DEQ_USER, H.PROPAGATED_MSGID FROM AQ$<DESTINATION_QUEUE_TABLE> T, TABLE(T.HISTORY) H WHERE T.Q_NAME = 'QUEUE_NAME'; To check 8.1-style queues: SELECT CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID FROM AQ$<DESTINATION_QUEUE_TABLE> WHERE QUEUE = 'QUEUE_NAME'; 11. Enable propagation tracing at the highest level by using event 24040, level 10. Debugging information is then logged to the job queue trace files, which you can check for errors and for the statements that were executed. Oracle Database 11g: Implement Streams
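Several of the checks in this workflow can be combined into a single monitoring pass over DBA_QUEUE_SCHEDULES. This sketch assumes the column names shown, which correspond to the checks in steps 1 and 3:

```sql
-- Hedged sketch: one query over propagation schedules, showing
-- whether each schedule is enabled, how much it has propagated,
-- and any recorded errors.
SELECT schema, qname, destination,
       schedule_disabled,    -- 'N' means enabled (step 1)
       total_number,         -- messages propagated so far (step 3)
       failures, last_error_msg,
       next_run_date, next_run_time
FROM   dba_queue_schedules;
```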

740 Troubleshooting AQ Errors
Troubleshooting and correcting common Streams AQ errors: ORA-1555: snapshot too old: rollback segment number %s with name "%s" too small (when dequeuing with the NEXT_MESSAGE navigation option) Suggested action: Use the FIRST_MESSAGE option once every 1,000 messages (reexecutes the cursor). Troubleshooting AQ Errors ORA-1555 error You may get this error when you dequeue with the NEXT_MESSAGE navigation option. NEXT_MESSAGE uses the snapshot created during the first dequeue call. After that, undo information may not be retained. Suggested action: Use the FIRST_MESSAGE option again to reexecute the cursor and get a new snapshot. Because the FIRST_MESSAGE option does not perform as well as the NEXT_MESSAGE option, it is recommended that you dequeue messages in batches: FIRST_MESSAGE for one, NEXT_MESSAGE for the next 1,000 messages, then FIRST_MESSAGE again, and so on. Oracle Database 11g: Implement Streams
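The batching recommendation above can be sketched as a dequeue loop that refreshes the snapshot with FIRST_MESSAGE once per 1,000 messages. The queue name and payload type are hypothetical; ORA-25228 (end-of-fetch with NO_WAIT) is assumed to signal an empty queue:

```sql
-- Hedged sketch: reset the dequeue cursor every 1,000 messages to
-- avoid ORA-1555 while keeping NEXT_MESSAGE's performance.
DECLARE
  deq_opt       DBMS_AQ.dequeue_options_t;
  mesg_prop     DBMS_AQ.message_properties_t;
  payload       VARCHAR2(4000);
  mesg_hnd      RAW(16);
  counter       PLS_INTEGER := 0;
  e_no_messages EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_no_messages, -25228);
BEGIN
  deq_opt.wait := DBMS_AQ.NO_WAIT;
  LOOP
    IF MOD(counter, 1000) = 0 THEN
      deq_opt.navigation := DBMS_AQ.FIRST_MESSAGE;  -- take a fresh snapshot
    ELSE
      deq_opt.navigation := DBMS_AQ.NEXT_MESSAGE;   -- reuse the snapshot
    END IF;
    DBMS_AQ.DEQUEUE(
      queue_name         => 'IX.test_queue',
      dequeue_options    => deq_opt,
      message_properties => mesg_prop,
      payload            => payload,
      msgid              => mesg_hnd);
    counter := counter + 1;
  END LOOP;
EXCEPTION
  WHEN e_no_messages THEN
    COMMIT;   -- queue drained; keep what was dequeued
END;
/
```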

741 Troubleshooting AQ Errors
Troubleshooting and correcting common Streams AQ errors: ORA-25237: navigation option used out of sequence Suggested action: Reset the dequeuing position by using the FIRST_MESSAGE navigation option and then specify the NEXT_MESSAGE or NEXT_TRANSACTION option. ORA-25307: Enqueue rate too high, flow control enabled Suggested action: Try the enqueue again after waiting for some time. Troubleshooting AQ Errors (continued) ORA-25237 error The NEXT_MESSAGE or NEXT_TRANSACTION option is specified after dequeuing all messages. You must reset the dequeue position by using the FIRST_MESSAGE option if you want to continue dequeuing between services (such as xa_start and xa_end boundaries). This is because XA cancels the cursor fetch state after an xa_end. If you do not reset, you get an error message stating that the navigation is used out of sequence. Suggested action: Reset the dequeuing position by using the FIRST_MESSAGE navigation option and then specify the NEXT_MESSAGE or NEXT_TRANSACTION option. ORA-25307 error Flow control has been enabled for the message sender. This means that the fastest subscriber of the sender’s messages is not able to keep pace with the rate at which messages are enqueued. The buffered messaging application must handle this error and attempt to enqueue the messages again after waiting for some time. Suggested action: Try the enqueue again after waiting for some time. Oracle Database 11g: Implement Streams
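The ORA-25307 handling described above can be sketched as a back-off-and-retry loop around the enqueue. The retry count, sleep interval, queue name, and payload are arbitrary illustrations, and the sketch assumes EXECUTE privilege on DBMS_LOCK for the sleep:

```sql
-- Hedged sketch: wait and retry when flow control blocks the enqueue.
DECLARE
  e_flow_control EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_flow_control, -25307);
  enq_opt   DBMS_AQ.enqueue_options_t;
  mesg_prop DBMS_AQ.message_properties_t;
  mesg_hnd  RAW(16);
BEGIN
  enq_opt.visibility    := DBMS_AQ.IMMEDIATE;
  enq_opt.delivery_mode := DBMS_AQ.BUFFERED;
  FOR attempt IN 1 .. 5 LOOP
    BEGIN
      DBMS_AQ.ENQUEUE(
        queue_name         => 'IX.test_queue',
        enqueue_options    => enq_opt,
        message_properties => mesg_prop,
        payload            => 'retry example',
        msgid              => mesg_hnd);
      EXIT;                     -- enqueue succeeded
    EXCEPTION
      WHEN e_flow_control THEN
        DBMS_LOCK.SLEEP(10);    -- let subscribers catch up, then retry
    END;
  END LOOP;
END;
/
```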

742 Troubleshooting Secure Queue Access
Streams queues are secure queues. Security must be configured properly for users to be able to perform operations on them. Common security errors for a Streams queue: ORA-24093: AQ agent <name> not granted privileges of database user <user> ORA-24033: no recipients for message ORA-25224: sender name must be specified for enqueue into secure queues Troubleshooting Secure Queue Access For details about the error messages listed in the slide, see Appendix D, “Common Streams Error Messages,” and the Streams documentation. Oracle Database 11g: Implement Streams
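For ORA-24093 in particular, the usual remedy is to associate the AQ agent with the database user that performs the queue operations. A hedged sketch with a hypothetical agent name, using the OE schema from the practices:

```sql
-- Hedged sketch: allow database user OE to act as the AQ agent
-- EXPLICIT_ENQ when operating on a secure (Streams) queue.
BEGIN
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'explicit_enq',
    db_username => 'oe');
END;
/
```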

743 Oracle Database 11g: Implement Streams 18 - 743
Summary In this lesson, you should have learned how to: Configure secure queue users for enqueuing and dequeuing messages Manually create and enqueue messages Configure a messaging client for dequeuing messages Explicitly dequeue messages Configure an apply process to dequeue user-created LCR messages Configure a message handler to dequeue non-LCR messages Configure message notification Describe and resolve common problems for Advanced Queuing Troubleshoot secure queue access Oracle Database 11g: Implement Streams

744 Oracle Database 11g: Implement Streams 18 - 744
Practice 23-1 Overview: Enqueuing and Dequeuing User-Created LCR Messages This practice covers the following topics: Configuring the OE user to enqueue messages into a SYS.AnyData queue Creating an LCR message manually Creating rules for the apply process pertaining to the user-created LCR message Creating an apply process to handle user-enqueued messages Explicitly enqueuing the message and verifying that the apply process applied the change to the shared table Oracle Database 11g: Implement Streams

745 Practice 23-2 Overview: Enqueuing and Dequeuing Non-LCR Messages
This practice covers the following topics: Creating a messaging client Creating and implementing a message handler Creating rules for non-LCR messages and adding these rules to the propagation and apply rule sets Explicitly enqueuing a non-LCR message Using the messaging client to dequeue the message Querying queue statistics Oracle Database 11g: Implement Streams

746 Practice 23 Result: Enqueuing and Dequeuing User-Created LCRs
AMER database EURO database OE schema OE APPLY_OE_RSET Rule set For message handling: OE.MESSAGES table OE.TEXT_MES_HANDLER procedure Enqueue DML ENQ_ROW_LCR procedure Q_CAPTURE Q_APPLY STRM01_CAP STRM01_PROPAGATION STRM01_APPLY APPLY_USER_EVENTS Oracle Database 11g: Implement Streams

747 Rules

748 Oracle Database 11g: Implement Streams 18 - 748
Objectives After completing this lesson, you should be able to: List the rule components Create a rule and rule set Create an evaluation context Perform basic rule administration Query the data dictionary for rule information Oracle Database 11g: Implement Streams

749 Oracle Database 11g: Implement Streams 18 - 749
Rules and Rule Sets A rule is a database object that a client uses to determine when to perform an action based on the occurrence of a message. A rule must be in a rule set for it to be evaluated. A rule set is a group of related rules. Multiple rules in a rule set are combined with the OR operator. Each rule set can be used by multiple processes or applications within the same database. A single rule can be in one rule set, multiple rule sets, or no rule sets. Rules and Rule Sets A rule is a database object that enables a client to perform an action when a message occurs and a condition is satisfied. Both user-created applications and Oracle features (such as Streams) can use rules. You can group related rules into rule sets. A single rule can be in one rule set, multiple rule sets, or no rule sets. Oracle Database 11g: Implement Streams

750 Oracle Database 11g: Implement Streams 18 - 750
Rule Components A rule consists of the following components: Rule condition Rule evaluation context (optional) Rule action context (optional) The following components are used during rule evaluation: Rule set Rules engine Rule Components When using Oracle Streams, you can control which information to share and where to share it by using rules. Each rule is specified as a condition that is similar to the condition in the WHERE clause of a SQL query. Rules are evaluated by a rules engine, which is a built-in part of the Oracle Database. Streams interacts with the rules engine through various APIs, such as DBMS_STREAMS_ADM and DBMS_CAPTURE_ADM. User applications can interact with the rules engine through the DBMS_RULE and DBMS_RULE_ADM APIs. This lesson explores the concepts of rules, rule sets, and evaluation contexts. Oracle Database 11g: Implement Streams

751 Oracle Database 11g: Implement Streams 18 - 751
Rule Condition A rule condition combines one or more expressions and operators, and returns a Boolean value (TRUE, FALSE, or NULL). Examples: The rule condition determines whether a rule is simple or complex. tab.department_id = 30 OR tab.job_id = 'PR_REP' is_manager(:employee_id) = 'Y' Rule Condition A rule condition combines one or more expressions and operators, and returns a Boolean value, which is a value of TRUE, FALSE, or NULL (unknown). An expression is a combination of one or more values, operators, and SQL functions that evaluates to a value. For example, the following condition consists of two values (one of which is a constant) and a comparison operator (=): tab.department_id = 30 This condition evaluates to TRUE when the value of department_id is 30. A simple rule uses simple expressions. A simple expression uses one of the following conditions: =, <, >, <=, >=, !=, IS NULL, and IS NOT NULL. A single rule condition may include multiple expressions combined with the AND, OR, and NOT logical operators. You can combine two or more simple expressions in a rule condition with the AND or OR logical operator; the rule remains simple. Use simple rules wherever possible because they: Are indexed internally by the rules engine Can be evaluated without executing SQL Can be evaluated with partial data Oracle Database 11g: Implement Streams

752 Oracle Database 11g: Implement Streams 18 - 752
Rule Condition (continued) Rule conditions may contain variables; each variable must be prefixed with a colon (:). The following is an example of a variable that is used in a rule condition: :x = 55 By using variables, you can refer to data that is not stored in a table. A variable may also improve performance by replacing a commonly occurring expression. Performance may improve because, instead of evaluating the same expression multiple times, the variable is evaluated once. A rule condition may also contain an evaluation of a call to a subprogram, such as a function. These conditions are evaluated in the same way as other conditions. That is, they evaluate to a value of TRUE, FALSE, or unknown. The following is an example of a rule condition that contains a call to a simple function named IS_MANAGER that determines whether an employee is a manager, based on the supplied employee ID number: is_manager(EMPLOYEE_ID) = 'Y' In this example, the value of the EMPLOYEE_ID column is passed to the IS_MANAGER function, which then returns a Y character if the employee represented by EMPLOYEE_ID is a manager. Oracle Database 11g: Implement Streams

753 Rule Evaluation Context
A rule evaluation context: Is a database object Provides information necessary to interpret rule conditions that reference external data Rule evaluation contexts provide information for: Variables that are used in the rule condition Table aliases for references to database objects in the rule condition dep.location_id IN (:loc_id1, :loc_id2) Rule Evaluation Context If you use variables or refer to table data in your rule condition, you must use a rule evaluation context. A rule evaluation context is a database object that provides the necessary information for the interpretation and evaluation of rule conditions that reference external data. For example, if a rule refers to an external variable, the information in the rule evaluation context may contain the type of the variable. If a rule refers to a table alias, the information in the evaluation context may contain the name and schema of the table that corresponds to the table alias. The objects that are referenced by a rule are determined by its rule evaluation context. The rule owner must have the necessary privileges to access these objects, such as the SELECT privilege on tables, the EXECUTE privilege on types, and so on. The rule condition is resolved in the schema that owns the evaluation context. For example, consider a rule evaluation context named HR_EVALCTX that contains the following information: Table alias DEP corresponds to the HR.DEPARTMENTS table. The LOC_ID1 and LOC_ID2 variables are both of the NUMBER type. Oracle Database 11g: Implement Streams

754 Oracle Database 11g: Implement Streams 18 - 754
Rule Evaluation Context (continued) The HR_EVALCTX rule evaluation context provides the necessary information to evaluate the following rule condition: dep.location_id IN (:loc_id1, :loc_id2) In this case, the rule condition evaluates to TRUE for a row in the HR.DEPARTMENTS table if that row has a value in the LOCATION_ID column that corresponds to either of the values that are passed in by the LOC_ID1 or LOC_ID2 variable. The rule cannot be interpreted or evaluated properly without the information in the HR_EVALCTX rule evaluation context. It also cannot be interpreted or properly evaluated if the rule owner does not have the SELECT privilege on the HR.DEPARTMENTS table. Oracle Database 11g: Implement Streams

755 Using Rule Evaluation Contexts
An evaluation context can be specified: For a rule set For a rule When a rule is added to a rule set If no rule evaluation context is found for a rule, the rule evaluation context of the associated rule set is used (if one exists). Using Rule Evaluation Contexts The following list describes which evaluation context is used when a rule is evaluated: If an evaluation context is associated with a rule, it is used for the rule whenever the rule is evaluated and any evaluation context associated with the rule set being evaluated is ignored. If a rule does not have an evaluation context, but an evaluation context was specified for the rule when it was added to a rule set by using the ADD_RULE procedure in the DBMS_RULE_ADM package, the evaluation context specified in the ADD_RULE procedure is used for the rule when the rule set is evaluated. If no rule evaluation context is associated with a rule and none was specified by the ADD_RULE procedure, the evaluation context of the rule set, if one exists, is used for the rule when the rule set is evaluated. Oracle Database 11g: Implement Streams

756 Oracle Database 11g: Implement Streams 18 - 756
Rule Action Context A rule action context provides a context for the action taken by a client of the rules engine when a rule evaluates to TRUE. Each action context can contain zero or more name-value pairs: Each name in the list must be unique. Name-value pairs provide support for Streams functionality, such as rule-based transformations. Rule Action Context A rule action context contains optional information that is interpreted by the client of the rules engine when the rule is evaluated for a message. The client of the rules engine can be a user-created application or an internal feature of the Oracle server, such as Streams. The information in an action context is stored as an array of name-value pairs. The rule action context information provides a context for the action taken by a client of the rules engine when a rule evaluates to TRUE. The rules engine does not interpret the action context. Instead, it returns the action context information when a rule evaluates to TRUE. Then, a client of the rules engine can interpret the action context information. Each action context can contain zero or more name-value pairs. If an action context contains multiple name-value pairs, each name in the list must be unique. Streams uses action contexts for rule-based transformations. Oracle Database 11g: Implement Streams

757 Rule Action Context: Example
Rule Action Context: Example

    Rule Name      Rule Condition         Name-Value Pair
    rule_dep_10    department_id = 10     course_number, 1057
    rule_dep_20    department_id = 20     course_number, 1215
    rule_dep_30    department_id = 30     NULL

Rule Action Context: Example Suppose a message event is defined as the addition of a new employee to a company. If the employee information is stored in the HR.EMPLOYEES table, the message event occurs whenever a row is inserted into this table. The company wants to specify that a number of actions are taken when a new employee is added, but the actions depend on which department the employee joins. One of these actions is that the employee is registered for a course that relates to the department. In this scenario, the company can create a rule for each department with an appropriate action context. Here, an action context information that is returned when a rule evaluates to TRUE specifies the number of a course that an employee should take. These action context name-value pairs are used in the client application as follows: The action context for the rule_dep_10 rule instructs the client application to enroll the new employee in course number 1057. The action context for the rule_dep_20 rule instructs the client application to enroll the new employee in course number 1215. The NULL action context for the rule_dep_30 rule instructs the client application not to enroll the new employee in any course. Oracle Database 11g: Implement Streams

758 Oracle Database 11g: Implement Streams 18 - 758
Rule Action Context: Example (continued) In this example, the client application to which the rules engine returns the action context information registers the new employee in the course with the returned course number. The client application does not register the employee for a course if a NULL action context is returned or if the action context does not contain a course number. If multiple clients use the same rule or if you want an action context to return multiple name-value pairs, you can list multiple name-value pairs in an action context. Suppose the company also adds an employee to a department electronic mailing list. In this case, the action context for the rule_dep_10 rule may contain two name-value pairs: Course_number : 1057 Dist_list : admin_list Oracle Database 11g: Implement Streams

759 Rule Evaluation with the Rules Engine
Rule Evaluation with the Rules Engine [Slide diagram: (1) an event occurs at the client; (2) the client passes it to the rules engine; (3) the engine evaluates the event against rules and evaluation contexts; (4) each rule evaluates to TRUE, FALSE, or unknown; (5) the engine returns the results with any optional action context; (6) the client performs the action.] Rule Evaluation with the Rules Engine The rules engine evaluates rule sets based on message events. A message event is an occurrence that is defined by the client of the rules engine. The client could be an application or a Streams process, such as capture or apply. 1. A client-defined message event occurs. 2. When a message event occurs, the client calls the DBMS_RULE.EVALUATE procedure to evaluate the event. The client also sends the name of the rule set whose rules should be used to evaluate the message. The client may also send: An evaluation context Table values and variable values. The table values contain ROWIDs that refer to the data in table rows, and the variable values contain the data for explicit variables. A message context. A message context is a varray of the SYS.RE$NV_LIST type that contains name-value pairs that identify the message. This optional information is not directly used or interpreted by the rules engine. Instead, it is passed to client callbacks, such as an evaluation function, a variable value evaluation function (for implicit variables), and a variable method function. Other information about the message and about how to evaluate the message with the DBMS_RULE.EVALUATE procedure Oracle Database 11g: Implement Streams

760 Oracle Database 11g: Implement Streams 18 - 760
Rule Evaluation with the Rules Engine (continued) 3. The rules engine uses the rules in the specified rule set and the relevant evaluation contexts to evaluate the message. You can specify an evaluation context when you run the DBMS_RULE.EVALUATION procedure. Only rules that use the specified evaluation context are evaluated. For Streams, you can use a default evaluation context. 4. The rules engine obtains the results of the evaluation. Each rule evaluates to TRUE, FALSE, or NULL (unknown). 5. The rules engine returns rules that evaluated to TRUE to the client. Each returned rule is returned with its entire action context, which may contain information or may be NULL. 6. The client performs actions based on the results that are returned by the rules engine. The rules engine does not perform action-based rule evaluations. Oracle Database 11g: Implement Streams

761 Manually Evaluating Rules
DBMS_RULE.EVALUATE( rule_set_name IN VARCHAR2, evaluation_context IN VARCHAR2, event_context IN SYS.RE$NV_LIST DEFAULT NULL, table_values IN SYS.RE$TABLE_VALUE_LIST DEFAULT NULL, column_values IN SYS.RE$COLUMN_VALUE_LIST DEFAULT NULL, variable_values IN SYS.RE$VARIABLE_VALUE_LIST DEFAULT NULL, attribute_values IN SYS.RE$ATTRIBUTE_VALUE_LIST DEFAULT NULL, stop_on_first_hit IN BOOLEAN DEFAULT FALSE, simple_rules_only IN BOOLEAN DEFAULT FALSE, true_rules OUT SYS.RE$RULE_HIT_LIST, maybe_rules OUT SYS.RE$RULE_HIT_LIST);

762 Rule Component: Overview
Rule Component: Overview [Slide diagram: capture and apply share one rule set; Rule2 belongs to both rule sets, Rule5 belongs to none; evaluation contexts attach to rule sets or to individual rules, and action contexts attach to rules.] Rule Component: Overview The diagram shows some of the rule components and the ways in which they can be used and grouped. You can specify an evaluation context: For a rule set For a rule A rule action context is associated with a rule. A rule must be in a rule set for it to be evaluated. In the diagram in the slide, Rule5 will not be evaluated. A single rule can be in one rule set, multiple rule sets, or no rule sets. For example, Rule2 is used by two rule sets. A rule set can be used by multiple processes or applications within the same database. In the diagram, the capture and apply processes use the same rule set. Oracle Database 11g: Implement Streams

763 Creating Rules and Rule Sets
Process for manually creating rules and rule sets: Create an evaluation context (optional). Create an action context (optional). Create a rule set. Create a rule. Assign the rule to the rule set. Associate the rule set with capture, apply, or propagation (if you are using Streams). A rule must be part of a rule set to be used by apply or capture, or to be used during propagation. Creating Rules and Rule Sets The code examples in the following slides demonstrate how you could use rules as part of an application that manages security badge access for a building. Oracle Database 11g: Implement Streams

764 Creating a Rule Evaluation Context
DECLARE tab SYS.RE$TABLE_ALIAS_LIST; var SYS.RE$VARIABLE_TYPE_LIST; BEGIN tab := SYS.RE$TABLE_ALIAS_LIST( SYS.RE$TABLE_ALIAS('dep', 'HR.DEPARTMENTS')); var := SYS.RE$VARIABLE_TYPE_LIST( SYS.RE$VARIABLE_TYPE( 'DEPT_ID', 'NUMBER', NULL, NULL)); DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT( evaluation_context_name => 'ACCESS_EVALCTX', table_aliases => tab, variable_types => var, evaluation_context_comment=>'BADGE_APP EvalCtx'); END; / Creating a Rule Evaluation Context In the security application, users are granted access to various locations based on their current employment status and department assignment, as listed in the HR.DEPARTMENTS table. To access this table within a rule condition, you must create an evaluation context. The example in the slide shows how to create the evaluation context access_evalctx. This evaluation context supplies a table alias for the HR.DEPARTMENTS table and the details for the variable called DEPT_ID, which is of the NUMBER type. When creating the evaluation context, you can supply the evaluation context name, the list of table aliases (which is of the SYS.RE$TABLE_ALIAS_LIST type ), and the list of variable types (which is of the SYS.RE$VARIABLE_TYPE_LIST type ). You can also optionally supply a comment for the evaluation context. To run this procedure, a user must meet at least one of the following requirements: The user must be the owner of the evaluation context that is being created and have the CREATE_EVALUATION_CONTEXT_OBJ system privilege. The user must have the CREATE_ANY_EVALUATION_CONTEXT system privilege. Oracle Database 11g: Implement Streams

765 Creating a Rule Action Context
DECLARE ac SYS.RE$NV_LIST; BEGIN ac := SYS.RE$NV_LIST(NULL); ac.ADD_PAIR('Level_2', SYS.AnyData.CONVERTVARCHAR2('Denied')); ac.ADD_PAIR('Level_4', SYS.AnyData.CONVERTVARCHAR2('Cleared')); ac.ADD_PAIR('Level_6', ... Creating a Rule Action Context The slide provides an example of specifying an action context for a rule. In this example, the action context has three name-value pairs: ('Level_2', 'Denied') ('Level_4', 'Cleared') ('Level_6', 'Cleared') If the rule evaluates to TRUE, the information about which levels are granted access and which levels are denied access is returned to the application. In this example, the levels may refer to floors in the building, security clearance levels, or the level associated with a job position. The action context may be associated with a rule that evaluates to TRUE depending upon an employee’s ID number, department, or job ranking. There is no action performed by an action context, but the security application can use the returned information to determine its next course of action. Suppose that the security application is used to create security badges. The action context shown in the slide may indicate the floors to which a user should have access, based on the department to which the employee is assigned. Individual rules would exist for each department with an action context customized for each department. Oracle Database 11g: Implement Streams

766 Oracle Database 11g: Implement Streams 18 - 766
Creating a Rule DBMS_RULE_ADM.CREATE_RULE( rule_name => 'IT_Dept', condition => ':dept_id = 60', action_context => ac); END; / Creating a Rule At this point, you have configured the following components: An evaluation context to inform the rules engine of the data to use for evaluating rules An action context to instruct the rules engine on what information to return when a rule evaluates to TRUE Now you create the rules. For this application, you create simple rules that evaluate to TRUE for each department located in the HR.DEPARTMENTS table. Running the procedure shown in the slide performs the following actions: Creates a rule named IT_Dept in the current schema. However, a rule with the same name and owner must not exist. Creates a condition that evaluates to TRUE whenever the department ID is 60. You must define the dept_id variable in a rule evaluation context for this rule or for the rule set in which this rule is placed. Specifies that the rule action context must be returned when this rule evaluates to TRUE In this example, no evaluation context is specified for the rule. Therefore, the rule either inherits the evaluation context of any rule set to which it is added, or is assigned an evaluation context explicitly when the DBMS_RULE_ADM.ADD_RULE procedure is run to add it to a rule set. At this point, the rule cannot be evaluated because it is not part of any rule set. Oracle Database 11g: Implement Streams

767 Oracle Database 11g: Implement Streams 18 - 767
Creating a Rule Set BEGIN DBMS_RULE_ADM.CREATE_RULE_SET( rule_set_name => 'access_levels', evaluation_context => 'access_evalctx', rule_set_comment => 'Rules used to determine floor access'); END; / Creating a Rule Set Rules are not evaluated unless they are part of a rule set, so you now need to create a rule set to be used by the rules engine. The procedure shown in the slide performs the following actions: Creates a rule set named ACCESS_LEVELS in the current schema. A rule set with the same name and owner must not already exist. Associates the rule set with the access_evalctx evaluation context, which was created earlier Associates a comment with this rule set Oracle Database 11g: Implement Streams

768 Assigning a Rule to a Rule Set
Use DBMS_RULE_ADM.ADD_RULE: BEGIN DBMS_RULE_ADM.ADD_RULE( rule_name => 'IT_Dept', rule_set_name => 'access_levels'); END; / Assigning a Rule to a Rule Set After the rule set has been created, you can assign the rules used by the security application to this rule set. In this example, no evaluation context is specified when running the ADD_RULE procedure. Therefore, if the rule does not have its own evaluation context, it inherits the evaluation context of the ACCESS_LEVELS rule set. If you want a rule to use an evaluation context other than the one specified for the rule set, you can set the EVALUATION_CONTEXT parameter to the particular evaluation context when you run the ADD_RULE procedure. Oracle Database 11g: Implement Streams
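The notes mention that you can pass a specific evaluation context when adding a rule to a rule set. A minimal sketch of that variant (not shown on the slide), reusing the access_evalctx context created earlier in this lesson:

```sql
-- Sketch only: add the rule with an explicit evaluation context,
-- overriding the one the rule would inherit from the rule set.
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name          => 'IT_Dept',
    rule_set_name      => 'access_levels',
    evaluation_context => 'access_evalctx');
END;
/
```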

769 Rule and Rule Set Privileges
There are two types of privileges: System privileges: Enable a user to create, alter, execute, or drop rule objects in the user’s own schema Enable a user to create, alter, execute, or drop rule objects in any schema if the ANY keyword is used Can be granted transitively if given the GRANT option Object privileges: Enable a user to alter or execute the specified rule object Rule and Rule Set Privileges Rule components are database objects, just like views, tables, and procedures. If the security application uses schemas or database users other than the schema in which the rule components were created, you must grant access to the rule components to all database users who need to perform rule evaluations. You can use the GRANT_OBJECT_PRIVILEGE procedure in the DBMS_RULE_ADM package to grant object privileges on a specific evaluation context, rule set, or rule. By using these privileges, a user can alter or execute the specified object. You can use the GRANT_SYSTEM_PRIVILEGE procedure in the DBMS_RULE_ADM package to grant system privileges on evaluation contexts, rule sets, and rules, to users and roles. By using these privileges, a user can create, alter, execute, and drop these objects in the user’s own schema or in any schema (if the ANY version of the privilege is granted, such as EXECUTE_ANY_RULE). You can revoke system privileges by calling the REVOKE_SYSTEM_PRIVILEGE procedure in the DBMS_RULE_ADM package. Object privileges can be revoked by calling the REVOKE_OBJECT_PRIVILEGE procedure in the DBMS_RULE_ADM package. Oracle Database 11g: Implement Streams
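The notes describe REVOKE_SYSTEM_PRIVILEGE and REVOKE_OBJECT_PRIVILEGE without showing them. A hedged sketch, reusing the grantees and privileges from the grant examples elsewhere in this lesson:

```sql
-- Sketch: undoing the grants used in this lesson's examples.
BEGIN
  -- Take the CREATE_ANY_RULE system privilege back from BADGE_APPL.
  DBMS_RULE_ADM.REVOKE_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
    revokee   => 'badge_appl');

  -- Take the rule set object privilege back from HR.
  DBMS_RULE_ADM.REVOKE_OBJECT_PRIVILEGE(
    privilege   => DBMS_RULE_ADM.ALL_ON_RULE_SET,
    object_name => 'badge_appl.access_levels',
    revokee     => 'hr');
END;
/
```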

770 Granting Rule System Privileges
Example: Grant the BADGE_APPL user the privilege to create an evaluation context in the user’s own schema. BEGIN DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE( privilege => DBMS_RULE_ADM.CREATE_ANY_RULE, grantee => 'badge_appl', grant_option => FALSE); END; / Granting Rule System Privileges In this example, the grant_option parameter in the GRANT_SYSTEM_PRIVILEGE procedure is set to FALSE, which is the default setting. Therefore, the BADGE_APPL user cannot grant the CREATE_ANY_RULE system privilege to other users or roles. If the grant_option parameter is set to TRUE, the BADGE_APPL user can grant this system privilege to other users. Oracle Database 11g: Implement Streams

771 Granting Rule Object Privileges
Example: Grant the HR user the privilege to both alter and execute a rule set named ACCESS_LEVELS in the BADGE_APPL schema. BEGIN DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE( privilege => SYS.DBMS_RULE_ADM.ALL_ON_RULE_SET, object_name => 'badge_appl.access_levels', grantee => 'hr', grant_option => FALSE); END; / Granting Rule Object Privileges In this example, the grant_option parameter in the GRANT_OBJECT_PRIVILEGE procedure is set to FALSE, which is the default setting. Therefore, the HR user cannot grant the ALL_ON_RULE_SET object privilege for the specified rule set to other users or roles. If the grant_option parameter is set to TRUE, the HR user can grant this object privilege to other users. Oracle Database 11g: Implement Streams

772 Rule Evaluation Context Privileges
To use an evaluation context, a user must meet at least one of the following conditions for the evaluation context: Own the evaluation context Have the EXECUTE_ON_EVALUATION_CONTEXT privilege for the evaluation context Have the EXECUTE_ANY_EVALUATION_CONTEXT system privilege EXEC DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE( - privilege=>SYS.DBMS_RULE_ADM.EXECUTE_ON_EVALUATION_CONTEXT,- object_name => 'BADGE_APPL.ACCESS_EVALCTX',- grantee => 'HR'); Oracle Database 11g: Implement Streams

773 Managing Rules and Rule Sets
By using the DBMS_RULE_ADM package, you can: Remove a rule from a rule set Drop a rule from the database Drop a rule set from the database Alter a rule: Change a rule condition Change or remove the rule evaluation context Change or remove the rule’s action context Change or remove the comment for a rule Managing Rules and Rule Sets The REMOVE_RULE procedure removes the specified rule from the specified rule set. If you run the REMOVE_RULE procedure for a rule that is being used by multiple rule sets, the rule is removed only from the specified rule set and remains in the other rule sets. The REMOVE_RULE procedure does not drop the rule from the database. The DROP_RULE procedure drops a rule from the database. The DROP_RULE procedure has a parameter called FORCE that defaults to FALSE. This means that the rule cannot be dropped if it is in one or more rule sets. If FORCE is set to TRUE, the rule is dropped from the database and automatically removed from any rule sets that contain it. The rule evaluation context that is associated with the rule, if any, is not dropped when you run this procedure. The DROP_RULE_SET procedure drops the rule set with the specified name from the database. The DROP_RULE_SET procedure has a parameter called delete_rules that defaults to FALSE. If the rule set contains any rules, these rules are not dropped. If the delete_rules parameter is set to TRUE, any rules in the rule set that are not in another rule set are automatically dropped from the database. If some of the rules in the rule set are in one or more other rule sets, these rules are not dropped. Oracle Database 11g: Implement Streams
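The three procedures described above can be sketched against the objects created earlier in this lesson (a sketch, not taken from the slide):

```sql
BEGIN
  -- Remove the rule from one rule set; the rule itself survives.
  DBMS_RULE_ADM.REMOVE_RULE(
    rule_name     => 'IT_Dept',
    rule_set_name => 'access_levels');

  -- Drop the rule set; delete_rules => FALSE (the default) keeps
  -- its remaining rules in the database.
  DBMS_RULE_ADM.DROP_RULE_SET(
    rule_set_name => 'access_levels',
    delete_rules  => FALSE);

  -- Drop the rule; force => TRUE also removes it from any rule
  -- sets that still contain it.
  DBMS_RULE_ADM.DROP_RULE(
    rule_name => 'IT_Dept',
    force     => TRUE);
END;
/
```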

774 Oracle Database 11g: Implement Streams 18 - 774
Altering a Rule BEGIN DBMS_RULE_ADM.ALTER_RULE( rule_name => 'BADGE_APPL.IT_Dept', condition => ':dept_id = 99', evaluation_context => NULL); END; / Altering a Rule You can use the ALTER_RULE procedure of DBMS_RULE_ADM to change one or more aspects of the specified rule. Suppose that you want to change the condition of the rule that was created in the previous example, “Creating a Rule.” The condition in the existing IT_Dept rule evaluates to TRUE when the department ID is 60. If you want to change the rule so that it evaluates to TRUE when the department ID is 99, use the ALTER_RULE procedure. The procedure in the slide alters the rule in this way. Note: Changing the condition of a rule affects all rule sets that contain that rule. Oracle Database 11g: Implement Streams

775 Monitoring Rule Components
DBA_EVALUATION_CONTEXTS DBA_EVALUATION_CONTEXT_TABLES DBA_EVALUATION_CONTEXT_VARS DBA_RULES DBA_RULE_SETS DBA_RULE_SET_RULES Monitoring Rule Components You can use the following data dictionary views to view rule information: DBA_EVALUATION_CONTEXTS describes all rule evaluation contexts in the database. DBA_EVALUATION_CONTEXT_TABLES describes all tables in all rule evaluation contexts in the database. DBA_EVALUATION_CONTEXT_VARS describes all variables in all rule evaluation contexts in the database. DBA_RULES describes all rules in the database. DBA_RULE_SETS describes all rule sets in the database. DBA_RULE_SET_RULES describes all rules in all rule sets in the database. Oracle Database 11g: Implement Streams
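For example, the following queries (a sketch, not part of the slide) list the rules in the database and their rule set membership:

```sql
-- All rules and their conditions:
SELECT rule_owner, rule_name, rule_condition
FROM   dba_rules;

-- Which rules belong to which rule sets:
SELECT rule_set_owner, rule_set_name, rule_owner, rule_name
FROM   dba_rule_set_rules
ORDER  BY rule_set_owner, rule_set_name;
```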

776 Dynamic Views for Monitoring Rules
V$RULE V$RULE_SET V$RULE_SET_AGGREGATE_STATS Dynamic Views for Monitoring Rules You can query the V$RULE dynamic performance view to display evaluation statistics for a particular rule since the database instance last started. You can obtain information such as: The total number of times the rule evaluated to TRUE since the database was started The total number of times the rule evaluated to MAYBE since the database was started The total number of evaluations on the rule that issued SQL since the database was started Generally, issuing SQL to evaluate a rule is more expensive than evaluating the rule without issuing SQL. You can query the V$RULE_SET dynamic performance view to: Display general information about rule set evaluations and invalidations since the database was started Determine the resource usage of a rule set since the database was started. If a rule set was evaluated multiple times since the database was started, some statistics are cumulative, including the statistics for the amount of CPU time, evaluation time, and the shared memory bytes used. Oracle Database 11g: Implement Streams

777 Oracle Database 11g: Implement Streams 18 - 777
Dynamic Views for Monitoring Rules (continued) You can query the V$RULE_SET_AGGREGATE_STATS dynamic performance view to display statistics for all rule set evaluations since the database was started. Some of the statistics measured in this view are: rule set evaluations (all) rule set evaluations (simple_rules_only) rule set evaluations (SQL free) rule set SQL executions rule set conditions processed rule set evaluation time (CPU) rule set evaluation time (elapsed) rule set user function calls (evaluation function) rule set user function calls (variable value function) Oracle Database 11g: Implement Streams
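Because the statistics above are exposed as name-value pairs, a single query returns them all (a sketch; the NAME and VALUE column names are assumed from the documented statistic names):

```sql
SELECT name, value
FROM   v$rule_set_aggregate_stats
ORDER  BY name;
```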

778 Oracle Database 11g: Implement Streams 18 - 778
Summary In this lesson, you should have learned how to: List the rule components Create a rule and rule set Create an evaluation context Perform basic rule administration Query the data dictionary for rule information Oracle Database 11g: Implement Streams

779 Integrating with Oracle Streams

780 Oracle Database 11g: Implement Streams 18 - 780
Objectives After completing this lesson, you should be able to: Describe how Streams works in a heterogeneous environment Describe how Oracle Streams can be used to integrate data from other systems Oracle Database 11g: Implement Streams

781 Oracle-to-Oracle Data Sharing
When configuring Streams between two Oracle databases, you configure different components at each site. In a heterogeneous environment, configuration is slightly different. (Diagram: two Oracle databases, each with its own capture and apply processes, connected by propagation) Oracle-to-Oracle Data Sharing Oracle Streams is an open information-sharing solution. Each element supports industry-standard languages and standards. Streams supports capture and apply from Oracle to non-Oracle systems. Changes can be applied to a non-Oracle system via a transparent gateway. Streams also includes an API to allow non-Oracle data sources to easily submit or receive change records, thereby enabling heterogeneous data movement in both directions. For example, a Sybase DBA could write some triggers to capture updates in a Sybase database, and then pass those changes to the Oracle system formatted as LCRs and submit them to the Stream. The graphic in the slide shows the normal configuration for data sharing with Oracle Streams between two Oracle databases. In the following slides, you examine different configurations for a heterogeneous environment. Oracle Database 11g: Implement Streams

782 Oracle to Non-Oracle Data Sharing
(Diagram: an Oracle database with capture and apply processes connects through Heterogeneous Services and a gateway to a non-Oracle database) Oracle to Non-Oracle Data Sharing If an Oracle database is the source and a non-Oracle database is the destination, the non-Oracle database destination lacks the following Streams mechanisms: A queue to receive events An apply process to dequeue and apply events To share data manipulation language (DML) changes from an Oracle source database to a non-Oracle destination database, the Oracle database functions as a proxy and carries out some steps that would normally be performed at the destination database. That is, the events intended for the non-Oracle destination database are dequeued in the Oracle database itself, and an apply process at the Oracle database uses Heterogeneous Services to apply the events to the non-Oracle database across a network connection through a gateway. The graphic in the slide illustrates an Oracle database sharing data with a non-Oracle database through an Oracle Transparent Gateway product. Oracle Database 11g: Implement Streams

783 Oracle to Non-Oracle Data Sharing
The capture process functions similarly. The apply process is configured at the Oracle database. No staging queue is created on the non-Oracle database. Only basic DML operations are supported. The captured DDL changes cannot be applied on the non-Oracle database. Error handlers and conflict handlers are not supported. Conflict detection is supported. Oracle to Non-Oracle Data Sharing (continued) In an Oracle to non-Oracle environment, the capture process functions in the same way that it would in an Oracle-only environment. That is, it finds changes in the redo log, captures them based on the capture process rules, and enqueues the captured changes as logical change records (LCRs) in a SYS.AnyData queue. In addition, a single capture process may capture the changes that will be applied at both the Oracle and non-Oracle databases. Similarly, the SYS.AnyData queue that stages the captured LCRs functions in the same way that it would in an Oracle-only environment. You can propagate LCRs to any number of intermediate queues in Oracle databases before they are applied to a non-Oracle database. An apply process running in an Oracle database uses a gateway to apply the changes that are encapsulated in LCRs directly to the database objects in a non-Oracle database. However, the LCRs are not propagated to a queue in the non-Oracle database, as they would be in an Oracle-only Streams environment. Instead, the apply process applies the changes directly through a database link to the non-Oracle database. Oracle Database 11g: Implement Streams

784 Oracle Database 11g: Implement Streams 18 - 784
Oracle to Non-Oracle Data Sharing (continued) When you specify that the data manipulation language (DML) changes made to certain tables should be applied at a non-Oracle database, an apply process can apply only the following types of DML changes: INSERT UPDATE DELETE The apply process cannot apply data definition language (DDL) changes at non-Oracle databases. Currently, error handlers and conflict handlers are not supported when sharing data from an Oracle database to a non-Oracle database. If an apply error occurs, the transaction containing the LCR that caused the error is moved into the error queue in the Oracle database. Oracle Database 11g: Implement Streams

785 Oracle to Non-Oracle Data Type Support
When applying changes to a non-Oracle database, an apply process applies the changes made to columns of only the following data types: CHAR, VARCHAR2 NCHAR, NVARCHAR2 NUMBER, DATE, RAW TIMESTAMP TIMESTAMP WITH TIME ZONE TIMESTAMP WITH LOCAL TIME ZONE INTERVAL YEAR TO MONTH INTERVAL DAY TO SECOND Oracle to Non-Oracle Data Type Support The apply process does not apply changes to columns of the following data types to non-Oracle databases: CLOB, NCLOB, BLOB, BFILE, LONG, LONG RAW, ROWID, UROWID, and user-defined types (including object types, REFs, VARRAYs, and nested tables). The apply process raises an error when an LCR contains an unsupported data type and the transaction containing the LCR that caused the error is moved to the error queue in the Oracle database. Each transparent gateway may have further limitations regarding the data types and general Streams support. For a data type to be supported in an Oracle to non-Oracle environment, the data type must be supported by both Streams and the gateway being used. Note: Large object (LOB) data types are supported by Streams in an environment with all Oracle databases. Even though LOB updates are supported by transparent gateways, LOBs are not supported in a heterogeneous Streams environment. The apply process cannot apply DDL changes at non-Oracle databases. Refer to specific transparent gateway documentation for details about any further restrictions on the data type support. For example, you may want to refer to the Oracle Transparent Gateway for Sybase Administrator’s Guide for HP-UX. Oracle Database 11g: Implement Streams

786 Configuring an Apply Process for an Oracle to Non-Oracle Environment
Configure Oracle Net Services, the gateway, and any other required database links (using an explicit CONNECT TO clause). Create an apply process: Use the CREATE_APPLY procedure of DBMS_APPLY_ADM. Specify a database link for the apply_database_link parameter. Specify apply process rules. Configure the apply process. Instantiate the tables at the non-Oracle database. Start the Streams processes at the source site. Configuring an Apply Process for an Oracle to Non-Oracle Environment An apply process running in an Oracle database uses Heterogeneous Services and an Oracle Transparent Gateway to apply the changes that are encapsulated in LCRs directly to the database objects in a non-Oracle database. The LCRs are not propagated to a queue in the non-Oracle database, as they would be in an Oracle-only Streams environment. Instead, the apply process applies the changes directly through a database link to the non-Oracle database. When you create an apply process that will apply changes to a non-Oracle database, you must have previously configured Oracle Net Services, the gateway, and a database link, which is used by the apply process to apply the changes to the non-Oracle database. The database link must be created with an explicit CONNECT TO clause. 1. To configure Oracle Net Services, you must update the network connectivity information. To initiate a connection with the non-Oracle system, the Oracle Database server starts an agent process through the Oracle listener. For the Oracle Database server to be able to connect to the agent, you must perform the following steps: Set up an Oracle Net Services service name for the agent that can be used by the Oracle Database server by configuring the tnsnames.ora file, the Oracle Internet Directory server, or a third-party name server using the Oracle naming adapter. Oracle Database 11g: Implement Streams

787 Oracle Database 11g: Implement Streams 18 - 787
Configuring Apply Process for an Oracle to Non-Oracle Environment (continued) Set up the listener on the gateway to listen for incoming requests from the Oracle Database server and spawn Heterogeneous Services agents. Refer to the Oracle Database Heterogeneous Connectivity Administrator’s Guide for more details about configuring Oracle Net Services to support heterogeneous agents. You must configure the Oracle Transparent Gateway to use the COMMIT_CONFIRM transaction model. 2. When the database link is created and working properly, create the apply process by using the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, and specify the database link for the apply_database_link parameter. By default, this parameter is set to NULL, indicating that the apply process applies messages at the local database. 3. After you manually create an apply process, you must configure the apply process rules to specify the changes that are to be applied at the non-Oracle database. These rules must be assigned to the rule sets used by the apply process. You can use the DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure to easily configure the rules for the specified apply process. You can also specify an existing rule set when you create the apply process or after it is created with the DBMS_APPLY_ADM.ALTER_APPLY procedure. 4. Now that the apply process has been created, it may need additional configuration: You must set the parallelism apply process parameter to 1 when the apply process is applying changes to a non-Oracle database. Currently, parallel apply to non-Oracle databases is not supported. If you use substitute key columns for any of the tables at the non-Oracle database, specify the database link to the non-Oracle database when you run the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. 
If you use a DML handler or a message handler to process events for any of the tables at the non-Oracle database, specify the database link to the non-Oracle database when configuring the handler, as in the following example: BEGIN DBMS_APPLY_ADM.SET_DML_HANDLER( object_name => 'OE.ORDERS', object_type => 'TABLE', operation_name => 'INSERT', error_handler => false, user_procedure => 'oe.conv_order_totals', apply_database_link => 'NON_ORACLE_DB_LINK'); END; / The object name in the preceding example refers to the name of the table on the non-Oracle database. If the DML handler or message handler performs DML operations on tables other than those in the LCR itself, ensure that the database link is used within the procedural code when referring to non-Oracle tables. Oracle Database 11g: Implement Streams
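Steps 2 through 4 above can be outlined in PL/SQL. This is a hedged sketch, assuming that the gateway, the Oracle Net configuration, a STRMADMIN.STREAMS_QUEUE queue, and the NON_ORACLE_DB_LINK database link (created with an explicit CONNECT TO clause) already exist; the apply process and table names are illustrative:

```sql
BEGIN
  -- Step 2: create the apply process, directing it at the gateway link.
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name          => 'strmadmin.streams_queue',
    apply_name          => 'apply_to_non_oracle',
    apply_captured      => TRUE,
    apply_database_link => 'NON_ORACLE_DB_LINK');

  -- Step 3: add rules selecting the changes to apply remotely.
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'oe.orders',
    streams_type => 'apply',
    streams_name => 'apply_to_non_oracle',
    queue_name   => 'strmadmin.streams_queue');

  -- Step 4: parallel apply to non-Oracle databases is not supported.
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_to_non_oracle',
    parameter  => 'parallelism',
    value      => '1');
END;
/
```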

788 Instantiating Tables in an Oracle to Non-Oracle Environment
To instantiate tables at the non-Oracle site, perform the following steps: Create tables at the non-Oracle system (if they do not exist). Prepare the tables for instantiation at the Oracle database. Populate the non-Oracle table with data from the Oracle database that is consistent to a single SCN. Set the instantiation SCN for the table at the Oracle database. Instantiating Tables in an Oracle to Non-Oracle Environment Before you start an apply process that applies DML changes to a non-Oracle database, complete the following steps to instantiate each table at the non-Oracle database: 1. Use the DBMS_HS_PASSTHROUGH package or the tools supplied with the non-Oracle database to create the table at the non-Oracle database. 2. If a capture process captures the changes that are shared between the Oracle and non-Oracle database, prepare all tables that will share data for instantiation at the Oracle database. 3. Populate the table at the non-Oracle system. Write a PL/SQL block (or a C program) that performs the following steps: Gets the current system change number (SCN) by using the GET_SYSTEM_CHANGE_NUMBER function in the DBMS_FLASHBACK package Invokes the ENABLE_AT_SYSTEM_CHANGE_NUMBER procedure in the DBMS_FLASHBACK package to set the current session to the obtained SCN. This action ensures that all fetches are done using the same SCN. Populates the table at the non-Oracle site by fetching data row by row from the table at the Oracle database, and then inserting the data row by row into the table at the non-Oracle database. All fetches should be done at the SCN that is obtained using the GET_SYSTEM_CHANGE_NUMBER function. Oracle Database 11g: Implement Streams

789 Oracle Database 11g: Implement Streams 18 - 789
Instantiating Tables in an Oracle to Non-Oracle Environment (continued) Refer to the Oracle Streams Replication Administrator’s Guide for an example of such a procedure. 4. Use the SET_TABLE_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package to set the instantiation SCN for the table at the non-Oracle database. Specify the SCN that you obtained in step 3. Ensure that you set the apply_database_link parameter to the database link for the non-Oracle database when you execute the SET_TABLE_INSTANTIATION_SCN procedure. Oracle Database 11g: Implement Streams
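The four instantiation steps can be sketched as follows, under stated assumptions: the table already exists at the non-Oracle site (step 1), the source database is named orcl.example.com, and NON_ORACLE_DB_LINK points at the gateway; the row-by-row population loop of step 3 is elided, as the full example is in the guide cited above:

```sql
DECLARE
  scn NUMBER;
BEGIN
  -- Step 2: prepare the shared table for instantiation at the source.
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'oe.orders');

  -- Step 3: fix a single SCN so that every population fetch is consistent.
  scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_FLASHBACK.ENABLE_AT_SYSTEM_CHANGE_NUMBER(scn);
  -- ... fetch rows from OE.ORDERS here and insert them row by row
  -- into the non-Oracle table through the gateway link ...
  DBMS_FLASHBACK.DISABLE;

  -- Step 4: record the instantiation SCN for the remote table.
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'oe.orders',
    source_database_name => 'orcl.example.com',
    instantiation_scn    => scn,
    apply_database_link  => 'NON_ORACLE_DB_LINK');
END;
/
```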

790 Non-Oracle to Oracle Data Sharing
The user application is responsible for assembling changes at the non-Oracle database as LCRs and enqueuing them into the Oracle database. The apply process remains the same. You can configure: Rule-based transformations during apply Conflict detection and resolution (Diagram: a user application gets changes from the non-Oracle database and enqueues them into a queue in the Oracle database, where an apply process consumes them) Non-Oracle to Oracle Data Sharing To capture and propagate changes from a non-Oracle database to an Oracle database, a custom application is required. This application gets the changes made to the non-Oracle database by reading from the transaction logs, by using triggers, or by some other method. The application must assemble and order the transactions, and must convert each change into an LCR. Then, the application must enqueue the LCRs into a queue in an Oracle database by using the DBMS_STREAMS_MESSAGING or DBMS_AQ package. The application must commit after enqueuing all LCRs in each transaction. The graphic in the slide shows a non-Oracle database sharing data with an Oracle database. Oracle Database 11g: Implement Streams

791 Non-Oracle to Oracle Streams Configuration
Create a Streams queue in the Oracle database. Create an apply process with the apply_captured parameter set to FALSE in the Oracle database. Add rules for LCRs to the apply process rule sets. Start the apply process in the Oracle database. At the non-Oracle site, create a procedure that constructs a row LCR, and then enqueues it into the newly created Streams queue. Use this procedure to begin creating and enqueuing LCRs. Non-Oracle to Oracle Streams Configuration Because the user application is responsible for assembling changes at the non-Oracle database into LCRs, and enqueuing the LCRs into a Streams queue at the Oracle database, the application is completely responsible for change capture. The application must enqueue transactions serially in the same order as the transactions committed on the non-Oracle source database. If you want to ensure transactional consistency between the Oracle database where the changes are applied and the non-Oracle database where the changes originate, you must use a transactional Streams queue to stage the LCRs at the Oracle database. The SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package automatically creates a transactional Streams queue. To illustrate why this is important, suppose that a single transaction contains three row changes and the custom application enqueues three row LCRs (one for each change), and then commits. With a transactional queue, a commit is performed by the apply process after the third row LCR, thereby retaining the consistency of the transaction. If you use a nontransactional queue, a commit is performed for each row LCR by the apply process. Oracle Database 11g: Implement Streams

792 Oracle Database 11g: Implement Streams 18 - 792
Non-Oracle to Oracle Streams Configuration (continued) In a non-Oracle to Oracle environment, the apply process functions in the same way that it would in an Oracle-only environment. That is, it dequeues each event from its associated Streams queue based on apply process rules, performs any rule-based transformations, and either sends the event to a handler or applies it directly. Error handling and conflict resolution also function the same as they would in an Oracle-only environment. Thus, you can specify a prebuilt update conflict handler or create a custom conflict handler to resolve conflicts. The apply process must be configured to apply user-enqueued LCRs and not captured LCRs. Therefore, the apply process must be created using the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, and the apply_captured parameter must be set to FALSE when you run this procedure. After the apply process is created, you can use the procedures in the DBMS_STREAMS_ADM package to add rules for LCRs to the apply process rule sets. Oracle Database 11g: Implement Streams
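The six configuration steps can be sketched as follows; all queue, apply, database, and table names are illustrative, and the LCR-building procedure is only a skeleton of what the custom application would call for each captured change:

```sql
BEGIN
  -- Steps 1 and 2: a transactional queue and an apply process that
  -- consumes user-enqueued (not captured) LCRs.
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.inbound_qt',
    queue_name  => 'strmadmin.inbound_q');

  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.inbound_q',
    apply_name     => 'apply_from_non_oracle',
    apply_captured => FALSE);

  -- Step 3: add rules for the LCRs.
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.employees',
    streams_type    => 'apply',
    streams_name    => 'apply_from_non_oracle',
    queue_name      => 'strmadmin.inbound_q',
    source_database => 'non_oracle_source');

  -- Step 4: start the apply process.
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_from_non_oracle');
END;
/

-- Step 5: a skeleton enqueue procedure for the custom application.
CREATE OR REPLACE PROCEDURE enqueue_row_lcr(
  p_table_name IN VARCHAR2,
  p_command    IN VARCHAR2) IS
  lcr SYS.LCR$_ROW_RECORD;
BEGIN
  lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
           source_database_name => 'non_oracle_source',
           command_type         => p_command,
           object_owner         => 'HR',
           object_name          => p_table_name);
  -- Populate old/new column values with lcr.ADD_COLUMN(...) here.
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.inbound_q',
    payload    => SYS.AnyData.ConvertObject(lcr));
  -- The application commits once every LCR in the source
  -- transaction has been enqueued.
END;
/
```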

793 Instantiation in a Non-Oracle to Oracle Environment
It is not possible to automatically instantiate tables in an Oracle database whose source tables are in a non-Oracle database. Use the following general steps to manually instantiate the tables: Use a non-Oracle utility to export the table as a flat file at the source site. Create an empty table matching the exported table at the destination site. Use SQL*Loader to load the contents of the flat file into the new table at the destination site. Instantiation in a Non-Oracle to Oracle Environment If the table exists in the Oracle database and is consistent with the table in the non-Oracle database, you do not need to instantiate the table. Oracle Database 11g: Implement Streams

794 Non-Oracle to Non-Oracle Data Sharing
An Oracle database must act as an intermediate database between two non-Oracle databases: The custom user application assembles the changes at one non-Oracle database and enqueues them in the Oracle database. The apply process in an Oracle database applies the changes to a different non-Oracle database by using Heterogeneous Services and a gateway. Non-Oracle to Non-Oracle Data Sharing Streams supports data sharing between two non-Oracle databases through a combination of non-Oracle to Oracle data sharing and Oracle to non-Oracle data sharing. Such an environment uses Streams in an Oracle database as an intermediate database between two non-Oracle databases. For example, a non-Oracle to non-Oracle environment may consist of the following databases: A non-Oracle database named London_rdbms1.net An Oracle database named SF_ORCL.net A non-Oracle database named NY_rdbms3.net A user application assembles the changes at London_rdbms1.net and enqueues them into a queue in SF_ORCL.net. The apply process at SF_ORCL.net then applies the changes to NY_rdbms3.net by using Heterogeneous Services and a gateway. Another apply process at SF_ORCL.net could apply some or all changes in the queue locally at SF_ORCL.net. One or more propagation jobs at SF_ORCL.net could propagate some or all changes in the queue to other Oracle databases. Oracle Database 11g: Implement Streams

795 Oracle Messaging Gateway
The following non-Oracle systems are supported: IBM WebSphere MQ JMS (using Oracle JMS) IBM WebSphere MQ Series TIBCO TIB/Rendezvous (Diagram: a propagation engine in the Oracle database connects over JDBC to the Messaging Gateway agent, whose MQ driver communicates with IBM WebSphere MQ Series) Oracle Messaging Gateway Oracle Messaging Gateway enables integration of Oracle-based applications with third-party message queuing–based applications. It provides automatic queue-to-queue propagation from an Oracle queue to a third-party queue, and from a third-party queue to an Oracle queue. Oracle Messaging Gateway enables communication with CICS, AS400, and other MQ Series–accessible applications. It provides guaranteed automatic message propagation between MQ Series and Oracle Streams or Oracle Streams AQ queues in an Oracle database. Oracle Streams queues can be accessed securely from the Internet, thus Internet-enabling your CICS and AS400 applications. For the gateway agent to propagate messages to and from a non-Oracle messaging system, a messaging system link, which represents a communication channel between the agent and the non-Oracle messaging system, must be created. Multiple messaging system links can be configured in the agent. Refer to the Oracle Streams Advanced Queuing User’s Guide and Reference for more information about Oracle Messaging Gateway. Oracle Database 11g: Implement Streams

796 Oracle Database 11g: Implement Streams 18 - 796
XML LCRs Streams has an XML schema for its LCR. Oracle supports multiple techniques for enqueuing these XML LCR messages into the Streams queues: OCI PL/SQL JMS Some third-party vendors of heterogeneous database change capture also publish their own XML schemas for changes. XML LCRs Streams defines an XML schema for its LCR. Oracle also supports multiple techniques for enqueuing these XML LCR messages into the Streams queues: Oracle Call Interface (OCI), PL/SQL, and Java Message Service (JMS). Some third-party vendors also define an XML schema as a format for customers to access their concept of a “logical change record” or change event. If the XML schemas are similar enough, it should be a straightforward process to use XML libraries to map the vendor’s LCR to the Oracle Streams LCR. The application developer is responsible for the translation of data from nonrelational systems to the Streams LCR format. The definitions of the Streams LCR XML schema are available in Oracle Streams Concepts and Administration. Oracle Database 11g: Implement Streams

797 Oracle Call Interface for XML LCRs
Is the preferred method of enqueuing the non-Oracle changes into the Oracle database Supports a complete set of transactional capabilities Can use OCIAQEnq or OCIAQEnqArray to enqueue XML LCRs Automatically converts the XML LCRs into the internal LCR representation used by Streams Oracle Call Interface for XML LCRs Oracle Call Interface (OCI), in combination with the XML LCR definition, is the preferred method of enqueuing the non-Oracle changes into the Oracle database. A complete set of transactional capabilities is supported in OCI, thereby providing a robust tool set for application developers to manage the captured change processing between the heterogeneous systems. There is no client-side definition of a Streams LCR type for OCI. Instead, OCIAQEnq or OCIAQEnqArray can be used to directly enqueue XML LCRs. Upon enqueue, the XML LCRs are automatically converted into the internal LCR representation used by Streams. OCIAQEnq is used to enqueue a single XML LCR into the Streams queue. To identify the end of a transaction, the OCITransCommit function is called. OCIAQEnq is available for both Oracle9i, Release 2 and Oracle Database 10g. A greater performance optimization option is available in Oracle Database 10g with the OCIAQEnqArray function. With this function, the XML LCRs for a single transaction can be batched into an array and enqueued as a single operation. This is the preferred function for use in Oracle Database 10g implementations. Oracle Database 11g: Implement Streams

798 PL/SQL Interface for LCRs
Enqueues a single LCR or an array of LCRs Can be invoked from OCI programs Works with user-constructed LCRs, not XML LCRs Uses procedures in the DBMS_AQ package PL/SQL Interface for LCRs It is also possible to use OCI to invoke PL/SQL to enqueue a single LCR or an array of LCRs. The PL/SQL enqueue procedures enqueue explicitly constructed LCRs, not XML LCRs. To use the DBMS_AQ.ENQUEUE and DBMS_AQ.ENQUEUE_ARRAY procedures, see the example “Constructing and Enqueuing LCRs” in Chapter 9 (“Managing LCRs”) of the Oracle Streams Replication Administrator’s Guide. DBMS_AQ.ENQUEUE_ARRAY is available only in the Oracle Database 10g release. DBMS_AQ.ENQUEUE is supported in both Oracle9i Release 2 and Oracle Database 10g. Oracle Database 11g: Implement Streams
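As a hedged sketch of the pattern those notes describe (the database name, queue, table, and column values below are hypothetical; the constructor follows the documented SYS.LCR$_ROW_RECORD.CONSTRUCT API):

```sql
-- Construct a row LCR explicitly and enqueue it with DBMS_AQ.ENQUEUE
-- (source database, queue, and values are hypothetical examples)
DECLARE
  lcr      SYS.LCR$_ROW_RECORD;
  newvals  SYS.LCR$_ROW_LIST;
  enq_opt  DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_prop DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id   RAW(16);
BEGIN
  -- One new-column value for the INSERT being described by the LCR
  newvals := SYS.LCR$_ROW_LIST(
    SYS.LCR$_ROW_UNIT('JOB_ID',
                      SYS.AnyData.ConvertVarchar2('IT_DBA'),
                      DBMS_LCR.NOT_A_LOB, NULL, NULL));
  lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
    source_database_name => 'site1.net',
    command_type         => 'INSERT',
    object_owner         => 'HR',
    object_name          => 'JOBS',
    new_values           => newvals);
  -- sender_id is required when the staging queue is a secure queue
  msg_prop.sender_id := SYS.AQ$_AGENT('HR', NULL, NULL);
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enq_opt,
    message_properties => msg_prop,
    payload            => SYS.AnyData.ConvertObject(lcr),
    msgid              => msg_id);
  COMMIT;  -- ends the transaction for the enqueued LCR
END;
/
```

The COMMIT marks the transaction boundary, mirroring the role of OCITransCommit in the OCI path described earlier.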

799 Java Message Service (JMS)
Java Message Service (JMS) is a messaging standard defined by Sun Microsystems, Oracle, IBM, and other vendors (javax.jms package). Oracle Java Message Service (OJMS) provides a Java API for Oracle, based on the JMS standard (oracle.jms package): Supports the standard JMS interfaces Contains extensions to support administrative operations and other features that are not part of the standard JMS support for Oracle Streams is available in both Oracle9i Database, Release 2 and Oracle Database 10g. Java Message Service (JMS) The Java Message Service API has been developed by Sun in close cooperation with the leading enterprise messaging vendors. JMS is a set of interfaces and associated semantics that define how a JMS client accesses the facilities of an enterprise messaging product. The Oracle Java Message Service package, oracle.jms, provides a set of interfaces and associated semantics based on the JMS standard. These interfaces define how a JMS client accesses the facilities of an enterprise messaging product such as Oracle Streams or Oracle Advanced Queuing. Oracle supports the standard JMS interfaces and has extensions to support the Oracle Streams and Oracle Streams AQ administrative operations, as well as other features that are not included in the public standard. Sample code for enqueuing an LCR and a user message with JMS is provided in the Sample Code section for Oracle Streams on the Oracle Technology Network website. Although JMS Streams support is available in both Oracle9i Database, Release 2 and Oracle Database 10g, it does not perform as well as the other techniques listed in this lesson. Oracle Database 11g: Implement Streams

For More Information Messaging Gateway and PL/SQL Enqueue API: Oracle Streams Advanced Queuing User’s Guide and Reference Heterogeneous support: Oracle Heterogeneous Connectivity Administrator’s Guide Oracle Transparent Gateway Administrator’s Guide Integration applications: Streams Advanced Queuing Java API Reference Java Developer's Guide Oracle Call Interface Programmer’s Guide Oracle Database 11g: Implement Streams

Summary In this lesson, you should have learned to: Describe how Streams works in a heterogeneous environment Describe how Oracle Streams can be used to integrate data from other systems Oracle Database 11g: Implement Streams

802 Common Streams Error Messages

Objectives After completing this lesson, you should be able to use error messages for the troubleshooting of: Capture process Propagation Apply process Streams Advanced Queuing Oracle Database 11g: Implement Streams

804 Troubleshooting Capture: ORA-00902 Error
ORA-00902: invalid data type The capture process is aborted. The trace file shows ORA-00902. Review rules to verify that rules are defined only on tables with columns of supported data types. Remove rules for the table that is generating the error. ORA-00902 Error If your capture process is aborted, you must first look for errors for the capture process. The ORA-00902 error indicates that the capture process tried to capture a change for an unsupported data type. Query the DBA_STREAMS_UNSUPPORTED view to see whether any tables that are configured for replication by using Oracle Streams are listed in the view. You can also view the capture process trace files to find the object ID of the table that contains the unsupported data type. To clear the error, remove all rules for this table from the capture process rule set, and then restart the capture process. The accompanying “Invalid data type for column” error text identifies the column with the unsupported data type. Oracle Database 11g: Implement Streams
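The diagnosis and cleanup described in these notes might look like the following sketch (the schema HR, table IMAGES, rule IMAGES12, and capture process CAPTURE_HR are hypothetical names):

```sql
-- 1. Find tables with unsupported column data types
SELECT owner, table_name, column_name, reason
FROM   DBA_STREAMS_UNSUPPORTED
WHERE  owner = 'HR';

-- 2. Identify the capture rules defined on the offending table
SELECT rule_owner, rule_name
FROM   DBA_STREAMS_TABLE_RULES
WHERE  streams_type = 'CAPTURE'
AND    table_owner  = 'HR'
AND    table_name   = 'IMAGES';   -- hypothetical table

-- 3. Remove the rule from the capture rule set and restart capture
BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name    => 'IMAGES12',   -- hypothetical rule name
    streams_type => 'capture',
    streams_name => 'CAPTURE_HR');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'CAPTURE_HR');
END;
/
```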

805 Troubleshooting Capture: ORA-00258 Error
ORA-00258: manual archiving in NOARCHIVELOG mode must identify log Verify that archive logging is configured for the source database. SQL> SELECT name, log_mode FROM V$DATABASE; NAME LOG_MODE SITE2 NOARCHIVELOG ORA-00258 Error Check the following: NAME is the name of the database. LOG_MODE is ARCHIVELOG when archive logging is enabled. Oracle Database 11g: Implement Streams

Common Capture Errors ORA-00258 ORA-00902 ORA and ORA-01323 ORA-1280 Common Capture Errors The ORA and ORA errors are covered in the lesson titled “Configuring a Capture Process.” They are not covered here. ORA-1280 is a generic error that indicates that log miner has failed. Check the capture trace files for further information. Sometimes status messages are returned (such as 1291) in the trace file. These status messages have the same meaning as the associated ORA messages. For example, a status 1291 is the same as ORA-1291, which indicates a missing log file. Oracle Database 11g: Implement Streams

807 ORA-01291 and ORA-01323 Capture Errors
ORA-01291: "missing logfile" ORA-01323: "invalid state" Indicates that an archived log file cannot be opened by the capture process [Diagram: redo log SCN timeline (868–1053) marking the first SCN, start SCN, required checkpoint SCN, and applied SCN] ORA-01291 and ORA-01323 Capture Errors When a capture process is started or restarted, it may need to scan redo log files that were generated before the log file that contains the start SCN. A capture process must scan these records to keep track of DDL changes to database objects. You can query the DBA_CAPTURE view to determine the first SCN and start SCN for a capture process. Removing required redo log files before they are scanned by a capture process causes the capture process to abort and writes the ORA-01291 error to a capture process trace file. If you see this error: Check the V$LOGMNR_LOGS view to determine the missing SCN range and restore the relevant redo log files. Query the REQUIRED_CHECKPOINT_SCN column in DBA_CAPTURE to determine the required checkpoint SCN for a capture process. Then restore the redo log file that includes the required checkpoint SCN and all subsequent redo log files, or re-create the capture process. When log files are missing or unavailable, the STATE column of V$STREAMS_CAPTURE can indicate the last SCN that was mined, rather than aborting with the ORA-01291 error. Use this SCN information to identify any log file threads that may not be present at the capture database. Oracle Database 11g: Implement Streams
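The SCN checks described in these notes can be run as a sketch like the following (the capture name CAPTURE_HR and the SCN literal are hypothetical):

```sql
-- Locate the SCN boundaries for the capture process
SELECT capture_name, first_scn, start_scn,
       required_checkpoint_scn, applied_scn
FROM   DBA_CAPTURE
WHERE  capture_name = 'CAPTURE_HR';

-- Find the archived log file that contains the required checkpoint SCN;
-- substitute the SCN returned by the query above for the literal 1025
SELECT name, thread#, sequence#, first_change#, next_change#
FROM   V$ARCHIVED_LOG
WHERE  1025 BETWEEN first_change# AND next_change#;
```

Restore the file returned by the second query, plus all subsequent redo log files, before restarting the capture process.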

ORA-01291 and ORA-01323 Capture Errors (continued) For some versions of Oracle Database 10g, you may get an ORA-01323: "invalid state" error instead. Typically, this error also indicates that an archived redo log file cannot be found. Use the same methods as described on the previous page to resolve this error. The STATE column does not show the SCN. It can have the following values: INITIALIZING, CAPTURING CHANGES, EVALUATING RULE, ENQUEUING MESSAGE, SHUTTING DOWN, ABORTING, CREATING LCR, WAITING FOR DICTIONARY REDO, WAITING FOR REDO, PAUSED FOR FLOW CONTROL, DICTIONARY INITIALIZATION, WAITING FOR APPLY TO START, CONNECTING TO APPLY DATABASE, WAITING FOR PROPAGATION TO START Oracle Database 11g: Implement Streams

809 Common Propagation Errors
ORA-12154: TNS:could not resolve service name. ORA-12514: TNS:listener does not currently know of service requested in connect descriptor. ORA-12541: TNS:no listener ORA-25315: unsupported configuration for propagation of buffered messages. Common Propagation Errors The most common propagation errors result from an incorrect network configuration. The errors displayed in the slide are caused by the tnsnames.ora file or database links being configured incorrectly. The tnsnames.ora file is a network configuration file that contains net service names mapped to connect descriptors for the local naming method, or net service names mapped to listener protocol addresses. A net service name is an alias mapped to a database network address contained in a connect descriptor. A connect descriptor contains the location of the listener through a protocol address and the service name of the database to which to connect. Clients and database servers use the net service name when making a connection to a database. The ORA-12154 error means that Oracle Net Services could not find the net service name in the tnsnames.ora file for the local naming method, or could not resolve the connect descriptor for other naming methods. To resolve the problem, do the following: For database links, make sure the string that is used in the USING clause matches the Oracle Net Services name that is used in the tnsnames.ora file. Some systems are case-sensitive. Verify that an entry exists in the tnsnames.ora file for the database to which you are trying to connect. Make sure that you follow the proper syntax and placement of connection clauses in the tnsnames.ora file. Oracle Database 11g: Implement Streams

Common Propagation Errors (continued) The ORA-12541 error typically indicates that the listener at the destination site was not started or the specified destination address is incorrect. Verify that the listener is running on the host to which you are trying to connect (use lsnrctl status). ORA-25315: unsupported configuration for propagation of buffered messages: This error is encountered in a RAC environment and typically indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database that is not the owner instance of the destination queue. To resolve this, use queue-to-queue propagation for buffered messages and confirm that the correct service name for the queue is available. Oracle Database 11g: Implement Streams
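A quick way to reproduce these network errors outside propagation is to exercise the database link directly from SQL (the link name site2.net below is hypothetical):

```sql
-- List the database links used by propagation and their connect strings;
-- HOST must match a net service name in tnsnames.ora
SELECT owner, db_link, host
FROM   DBA_DB_LINKS;

-- Test a link end to end: an ORA-12154, ORA-12514, or ORA-12541 raised
-- here reproduces the same failure the propagation job is hitting
SELECT * FROM DUAL@site2.net;
```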

Common Apply Errors ORA-00001: unique constraint (%s.%s) violated ORA-02291: parent key not found ORA-02292: child record found ORA-01031: insufficient privileges ORA-06550: line x, column y: ORA-23416: table does not contain a primary key constraint ORA-23607: invalid column ORA-26687: no instantiation SCN provided ORA-26688: missing key in LCR (metadata mismatch) ORA-26689: column datatype mismatch in LCR ORA-26786: no data found (on top of ORA-01403) ORA-26787: no data found (on top of ORA-01403) Common Apply Errors In past releases, an ORA-01403 error was returned for delete and update conflicts. In Oracle Database 11g, two new error messages make it easier to handle apply errors in DML handlers and error handlers. They appear on top of the ORA-01403 error. An ORA-26787 error is raised if the row to be updated or deleted does not exist in the target table. An ORA-26786 error is raised when the row exists in the target table, but the values of some columns do not match those of the row LCR. The following errors can be viewed as common data conflict errors: Unique conflicts ORA-00001: unique constraint (%s.%s) violated Foreign key or ordering conflicts ORA-02291: parent key not found ORA-02292: child record found Update conflicts ORA-26786: no data found (on top of ORA-01403) Delete conflicts ORA-26787: no data found (on top of ORA-01403) Oracle Database 11g: Implement Streams

812 Common Data Conflict Errors
Unique conflicts ORA-00001: unique constraint (%s.%s) violated Foreign key or ordering conflicts ORA-02291: parent key not found ORA-02292: child record found Common Data Conflict Errors An apply process detects a uniqueness conflict if a uniqueness constraint violation occurs when applying an LCR that contains an insert or update operation. This is indicated with the ORA error. You can use a DML or error handler to handle these errors. Transactions that are applied in a different sequential order may experience referential integrity problems at a remote database, if the supporting data has not been successfully propagated to that database. These errors can generate either an ORA or ORA error. If an ordering conflict is encountered, you can resolve the conflict by reexecuting the transaction in the error queue after the required data has been propagated to the remote database and applied. Oracle Database 11g: Implement Streams

813 ORA-01031: Insufficient Privileges
The designated apply user must have the privileges to perform SQL on the replicated objects.  Grant privileges explicitly to the apply user by using the GRANT command. Additional privileges for DDL commands may be needed. To resolve the error, perform the following steps: Determine the missing privilege. Grant the privileges directly to the apply user. Reexecute the transaction from the error queue. ORA-01031: Insufficient Privileges An ORA-01031 error indicates that the user designated as the apply user is missing one or more of the privileges required for a particular DML or DDL operation on a replicated object. This is an easy error to recover from after the appropriate privileges have been granted. The apply user privileges must be granted by an explicit grant of each privilege. Granting these privileges through a role is not sufficient for the Streams apply user. In particular, the privileges required are the following: For table level changes: INSERT, UPDATE, DELETE, and SELECT privileges on the specific table For DDL rules on tables: ALTER TABLE for the specific table, or the ALTER ANY TABLE system privilege To implement schema level changes: CREATE ANY TABLE, CREATE ANY INDEX, CREATE ANY PROCEDURE, ALTER ANY TABLE, and ALTER ANY PROCEDURE system privileges To implement global level changes: The ALL PRIVILEGES system privilege Oracle Database 11g: Implement Streams

ORA-01031: Insufficient Privileges (continued) To fix the problem: 1. Ensure that the required privileges have been granted: SELECT * FROM SESSION_PRIVS; Make sure that the privilege was explicitly granted, even if the privilege is visible in the SESSION_PRIVS view. 2. If you cannot determine how the privilege was granted, or if the privilege is missing, explicitly grant the privilege to the apply user, as in the following example: GRANT CREATE ANY TABLE to strmadmin; 3. Reexecute the error transactions from the DBA_APPLY_ERROR view by using DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS. Alternatively, you can execute an individual error by passing local_transaction_id from DBA_APPLY_ERROR to the DBMS_APPLY_ADM.EXECUTE_ERROR procedure. Oracle Database 11g: Implement Streams

ORA-06550: Error on Apply An ORA-06550 error: Indicates an error within an apply handler or transformation function Typically causes the apply process to abort with no errors in the error queue The trace file for the apply coordinator will report the full error stack. ORA in STREAMS process ORA-12801: error signaled in parallel query server P000 ORA-06550: line 1, column 15: PLS-00201: identifier 'HR.HR_TO_DEMO' must be declared … ORA-06550: Error on Apply One of the most common reasons for receiving an ORA-06550 error in a DML handler or transformation function is missing privileges. You may see an error in the DBA_APPLY_ERROR view, as shown here: SELECT apply_name, source_database, local_transaction_id, message_number, error_message FROM DBA_APPLY_ERROR; APPLY_NAME SOURCE_DATABASE LOCAL_TRANSACTION_ID MESSAGE_NUMBER ERROR_MESSAGE APPLY_SITE1_LCRS SITE1.NET ORA-06550: line 13, column 7: Sometimes, this error causes the apply process to abort without errors appearing in the DBA_APPLY_ERROR view. However, the trace file for the apply coordinator will report the error, as shown in the slide. Oracle Database 11g: Implement Streams

ORA-06550: Error on Apply (continued) If the specified apply user has not been explicitly granted the privilege to execute the DML handler procedure or the transformation function, you can also receive errors similar to those shown in the slide. You can also get this error if there is a syntax error within the apply handler or transformation function procedural code. To resolve the error, perform the following steps (depending on the cause of the error): Grant the missing privileges. Fix the error in the apply handler or transformation function procedural code. After fixing the problem, if the apply process is aborted, you must first restart it. Oracle Database 11g: Implement Streams

817 ORA-23416: Table Does Not Include a Primary-Key Constraint
Confirm that the table has an explicit primary-key constraint: P = primary key C = check constraint U = unique key R = referential integrity SELECT constraint_name,constraint_type FROM DBA_CONSTRAINTS WHERE table_name='JOBS' AND owner='HR'; CONSTRAINT_NAME C JOB_ID_PK P JOB_TITLE_NN C ORA-23416: Table Does Not Include a Primary-Key Constraint A UNIQUE constraint combined with a NOT NULL constraint is not the same as a primary-key constraint. If a primary key does not exist for the table, a substitute key can be used instead. Oracle Database 11g: Implement Streams
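If a primary key cannot be added, designating substitute key columns might look like the following sketch (the object and column names are hypothetical):

```sql
-- Designate a substitute key for a table that has no primary key;
-- the column list should uniquely identify each row
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.jobs',
    column_list => 'job_id');
END;
/

-- Verify the substitute key columns now in effect
SELECT object_owner, object_name, column_name
FROM   DBA_APPLY_KEY_COLUMNS;
```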

ORA-23607: Invalid Column This error is generated when an invalid column is specified in a function that accesses the column list contained in the LCR. Check the columns in the object and specify the correct column name. lcr.delete_column('DETPNO','*'); ORA-23607: Invalid Column This error is raised by the SYS.LCR$_ROW_RECORD or SYS.LCR$_DDL_RECORD member function when the value of the column_name parameter does not match the name of any of the columns in the LCR. In the example in the slide, a function or procedure is trying to delete a column from an LCR, but the column name is spelled incorrectly. You can encounter this error when using an apply handler or transformation function to: Delete a column from an LCR and the LCR does not have the column Rename a column that does not exist in the LCR In an UPDATE statement, call the GET_VALUE or GET_VALUES member procedures for NEW values in a DML handler or transformation function without explicitly setting the use_old parameter to FALSE To resolve the error, check the column names contained within the LCR and modify the apply handler procedure or transformation function as needed. Oracle Database 11g: Implement Streams

819 ORA-26687: Instantiation SCN Not Set
For apply to execute captured change messages, the target database object must be instantiated. If apply fails with ORA-26687, search for missing instantiation SCNs: DBA_APPLY_INSTANTIATED_OBJECTS (table DML) DBA_APPLY_INSTANTIATED_SCHEMAS (schema DDL) DBA_APPLY_INSTANTIATED_GLOBAL (global DDL) Correct the error by: Setting the instantiation SCN with Data Pump, or the Export and Import utilities Executing the SET_*_INSTANTIATION_SCN procedures to manually set the instantiation SCN ORA-26687: Instantiation SCN Not Set Typically, this error occurs because the instantiation SCN is not set on an object for which an apply process is attempting to apply changes. You can query the DBA_APPLY_INSTANTIATED_[OBJECTS|SCHEMAS|GLOBAL] views to find the objects that do not have instantiation SCNs. You can use Data Pump, Export, or Import to set the instantiation SCNs. Alternatively, you can execute one or more of the following procedures in the DBMS_APPLY_ADM package: SET_TABLE_INSTANTIATION_SCN SET_SCHEMA_INSTANTIATION_SCN SET_GLOBAL_INSTANTIATION_SCN SET_SCHEMA_INSTANTIATION_SCN sets the instantiation SCN that enables the apply process to perform DDL in that schema. With this instantiation SCN set, new tables created at the destination site by the apply process are automatically updated with an instantiation SCN at the apply site after the table is created. However, existing tables at the apply site must have the SCN set explicitly before DML can be applied by the apply process. You can use the recursive parameter of SET_GLOBAL_INSTANTIATION_SCN or SET_SCHEMA_INSTANTIATION_SCN to set the instantiation SCN at the specified level (schema or database) and for all objects at lower levels (schema or table) if a database link for the global_name of the source database is available at the target database. Oracle Database 11g: Implement Streams
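Setting a table's instantiation SCN manually might look like the following sketch, run at the destination database (the object names and the database link site1.net are hypothetical):

```sql
-- Fetch the current SCN from the source over a database link, then
-- record it as the instantiation SCN for one table at the destination
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@site1.net;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.jobs',
    source_database_name => 'site1.net',
    instantiation_scn    => iscn);
END;
/

-- Confirm the instantiation SCN is now recorded
SELECT source_object_owner, source_object_name, instantiation_scn
FROM   DBA_APPLY_INSTANTIATED_OBJECTS;
```

Changes committed at the source before this SCN are skipped by the apply process, so the table data must already reflect that point in time.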

ORA-26688: Missing Key in LCR The error indicates a metadata mismatch: If no primary key exists for the table at the destination site, alter the table to have the same primary key as the source table, or specify substitute key columns by using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. Make sure that all key columns are being supplementally logged at the source database. Determine which columns are not being supplementally logged at the source site. Query DBA_LOG_GROUPS to check table-level supplemental logging, or query V$DATABASE to check database-level supplemental logging: SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_all, force_logging FROM V$DATABASE; Oracle Database 11g: Implement Streams
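Enabling the missing supplemental logging at the source database might look like this sketch (the table and log group names are hypothetical):

```sql
-- Database-level: log primary-key and unique-key columns for all tables
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Or table-level: always log an explicit column group for one table
ALTER TABLE hr.jobs
  ADD SUPPLEMENTAL LOG GROUP jobs_key (job_id) ALWAYS;
```

Only changes captured after logging is enabled carry the extra column values, so the error transaction may still need the data fixed manually.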

821 ORA-26689: Column Type Mismatch
Raised when the data types of columns in the LCR are not the same as the data types in the database object Possible causes: The column name is valid but the data types do not match. The LCR contains extra columns. ORA-26689: Column Type Mismatch Typically, this error occurs because one or more columns at a table in the source database do not match with the corresponding columns at the destination database. The LCRs from the source database may contain more columns than the table at the destination database, or there may be a type mismatch for one or more columns. If the columns differ at the databases, you can use rule-based transformations to avoid generating errors. You can also resolve this error by creating a DML or DDL handler to modify the LCR so that its contents match the structure and data types of the destination table. If you use an apply handler or rule-based transformation, make sure that any SYS.AnyData conversion functions match the data type of the column value being set. For example, if the column is specified as VARCHAR2, use the SYS.AnyData.CONVERTVARCHAR2 function to wrap the VARCHAR2 value in the ANYDATA instance. This error may also occur if supplemental logging is not specified when required for non-key columns at the source database. In this case, the LCRs generated at the source database do not contain data values for these non-key columns. Oracle Database 11g: Implement Streams

ORA-26786 and ORA-26787 Errors Delete conflict error: ORA-26787: no data found (formerly ORA-01403) Update conflict error: ORA-26786: no data found (formerly ORA-01403) Example: ORA-26786: A row with key ("EMPLOYEE_ID") = (191) exists but has conflicting column(s) "SALARY" in table HR.EMPLOYEES ORA-01403: no data found ORA-26786 and ORA-26787 Errors An apply process detects a delete conflict if it cannot find a row when applying an LCR that contains an update or delete operation because the primary key of the row does not exist. This is indicated with the ORA-26787 error. You can use a DML or error handler to handle these errors. An apply process detects an update conflict if there is any difference between the old values for a row in a row LCR and the current values of the same row at the destination database. Because a row with the specific values could not be found, an update conflict is signaled with an ORA-26786 error. You can use a prebuilt conflict handler, a DML handler, or an error handler to handle these errors. Note: The ORA-01403 error, which was returned in past releases, now appears lower in the error stack. If you have existing DML handlers and error handlers, you may need to modify them. Oracle Database 11g: Implement Streams

Resolving ORA-26786 Errors Typically occurs when you attempt an UPDATE on an existing row and the OLD_VALUES in the LCR do not match the current values at the destination site Are resolved by “fixing” the data so that the LCR can be applied Can also occur if: The primary-key constraint is not specified on the table at the apply site Key columns are not provided from the source supplemental logging Resolving ORA-26786 Errors To fix the data at the destination site so that the change can be applied, perform the following steps: 1. Set an apply tag in your current session. EXECUTE DBMS_STREAMS.set_tag('FF'); 2. Modify the data on the local site to match OLD_VALUES. 3. Reexecute the error. EXECUTE DBMS_APPLY_ADM.execute_error('local_txn_id'); 4. Unset the tag. EXECUTE DBMS_STREAMS.set_tag(null); The EXECUTE_ERROR procedure has a parameter called USER_PROCEDURE. You can specify a user-defined procedure that modifies the error transaction so that it can be successfully executed. Specify NULL to execute the error transaction without running a user procedure. Cause: a missing primary key. Solution: alter the table to add a primary-key constraint or use substitute-key columns. Cause: missing supplemental logging information. Solution: configure supplemental logging at the source database for the required columns. Oracle Database 11g: Implement Streams

824 Troubleshooting AQ Errors
Troubleshooting and correcting common Streams AQ errors: ORA-01555: snapshot too old: rollback segment number %s with name "%s" too small (when dequeuing with the NEXT_MESSAGE navigation option) Suggested action: Use the FIRST_MESSAGE option once every 1,000 messages (executes the cursor again). Troubleshooting AQ Errors ORA-01555 error You might get this error when you dequeue with the NEXT_MESSAGE navigation option. NEXT_MESSAGE uses the snapshot created during the first dequeue call. After that, undo information may not be retained. Suggested action: Use the FIRST_MESSAGE option again to reexecute the cursor and get a new snapshot. Because the FIRST_MESSAGE option does not perform as well as the NEXT_MESSAGE option, it is recommended that you dequeue messages in batches. For example, FIRST_MESSAGE for one, NEXT_MESSAGE for the next 1,000 messages, then FIRST_MESSAGE again, and so on. Oracle Database 11g: Implement Streams
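The batching advice above can be sketched as follows (the queue and consumer names are hypothetical, and exception handling for an empty queue is omitted for brevity):

```sql
-- Dequeue in batches: FIRST_MESSAGE re-executes the cursor and takes a
-- fresh snapshot; NEXT_MESSAGE reuses the existing snapshot
DECLARE
  deq_opt  DBMS_AQ.DEQUEUE_OPTIONS_T;
  msg_prop DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id   RAW(16);
  payload  SYS.AnyData;
BEGIN
  deq_opt.consumer_name := 'OE_AGNT';      -- hypothetical subscriber
  deq_opt.wait          := DBMS_AQ.NO_WAIT;
  FOR i IN 1 .. 5000 LOOP
    -- Take a new snapshot every 1,000 messages to avoid ORA-01555
    IF MOD(i, 1000) = 1 THEN
      deq_opt.navigation := DBMS_AQ.FIRST_MESSAGE;
    ELSE
      deq_opt.navigation := DBMS_AQ.NEXT_MESSAGE;
    END IF;
    DBMS_AQ.DEQUEUE(
      queue_name         => 'ix.streams_queue',
      dequeue_options    => deq_opt,
      message_properties => msg_prop,
      payload            => payload,
      msgid              => msg_id);
  END LOOP;
  COMMIT;
END;
/
```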

825 Troubleshooting AQ Errors
Troubleshooting and correcting common Streams AQ errors: ORA-25237: navigation option used out of sequence Suggested action: Reset the dequeuing position by using the FIRST_MESSAGE navigation option and then specify the NEXT_MESSAGE or NEXT_TRANSACTION option. ORA-25307: Enqueue rate too high, flow control enabled Suggested action: Try enqueue after waiting for some time. Troubleshooting AQ Errors (continued) ORA-25237 error The NEXT_MESSAGE or NEXT_TRANSACTION option is specified after dequeuing all messages. You must reset the dequeue position by using the FIRST_MESSAGE option, if you want to continue dequeuing between services (such as xa_start and xa_end boundaries). This is because XA cancels the cursor fetch state after an xa_end. If you do not reset, you get an error message stating that the navigation is used out of sequence. Suggested action: Reset the dequeuing position by using the FIRST_MESSAGE navigation option and then specify the NEXT_MESSAGE or NEXT_TRANSACTION option. ORA-25307 error Flow control has been enabled for the message sender. This means that the fastest subscriber of the sender’s message is not able to keep pace with the rate at which messages are enqueued. The buffered messaging application must handle this error and attempt again to enqueue messages after waiting for some time. Suggested action: Try enqueue after waiting for some time. Oracle Database 11g: Implement Streams

826 ORA-24093: AQ Agent Not Granted Privileges of Database User
To manually configure secure queue access: DECLARE subscriber SYS.AQ$_AGENT; BEGIN DBMS_AQADM.CREATE_AQ_AGENT('OE_AGNT'); DBMS_AQADM.ENABLE_DB_ACCESS('OE_AGNT','OE'); subscriber := SYS.AQ$_AGENT('OE_AGNT', NULL,NULL); SYS.DBMS_AQADM.ADD_SUBSCRIBER( queue_name => 'ix.streams_queue', subscriber => subscriber); END; / GRANT EXECUTE ON DBMS_AQ TO oe; ORA-24093: AQ Agent Not Granted Privileges of Database User For a user to perform queue operations such as enqueue and dequeue on a secure queue, the user must be configured as a secure queue user of the queue. If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create the secure queue, the queue owner and the user specified by the queue_user parameter are automatically configured as secure users of the queue. If you want to enable other users to perform operations on the queue, you can configure these users in one of the following ways: Run SET_UP_QUEUE and specify a queue_user. Queue creation is skipped if the queue already exists, but a new queue user is configured if one is specified. Associate the user with an AQ agent manually. To associate a user with an AQ agent manually, perform the following steps: 1. Create an agent by using DBMS_AQADM.CREATE_AQ_AGENT. Oracle Database 11g: Implement Streams

ORA-24093: AQ Agent Not Granted Privileges of Database User (continued) 2. If the user must be able to dequeue messages from queue, make the agent a subscriber of the secure queue: DECLARE subscriber SYS.AQ$_AGENT; BEGIN subscriber := SYS.AQ$_AGENT('oe_agnt', NULL, NULL); DBMS_AQADM.ADD_SUBSCRIBER( queue_name => 'ix.streams_queue', subscriber => subscriber, rule => NULL, transformation => NULL); END; /   3. Associate the user with the agent by using DBMS_AQADM.ENABLE_DB_ACCESS. 4. Grant the user the EXECUTE privilege on the DBMS_STREAMS_MESSAGING or DBMS_AQ package, if the user is not already granted these privileges. 5. Grant the user specific privileges to perform queue operations, such as enqueue and dequeue privileges. To disable a secure queue user, you can revoke ENQUEUE and DEQUEUE privileges on the queue from the user, or you can run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package. For Streams messaging, the user who creates a messaging client is granted the privileges to dequeue from the queue using the messaging client and can dequeue messages that satisfy the messaging client rule sets. When you generate rules by using procedures in the DBMS_STREAMS_ADM package, the specified Streams client is automatically configured as subscriber for the specified queue, if that client performs dequeue operations. The rule sets for all subscribers to a queue are combined into a single system-created rule set to make the subscription more efficient. Oracle Database 11g: Implement Streams

ORA-24033: No Recipients for Message
You cannot enqueue a message into a queue unless a user or rule is configured to receive the message. To fix the problem:
- Add a rule for the queue that evaluates to TRUE for the message (propagation, apply, or messaging client).
- Specify a recipient when enqueuing the message by configuring the message properties (DBMS_AQ).

ORA-24033: No Recipients for Message

The ORA-24033 error is raised when an enqueue is performed on a queue that allows multiple consumers, such as a Streams staging queue, but no explicit recipients were specified in the enqueue call and no queue subscribers were created as recipients for the message. A subscriber is an agent authorized by a Streams administrator to retrieve messages from a queue. When you create propagation rules, the propagation is created as a subscriber to the source queue, and the rules indicate which messages the propagation is interested in. Similarly, when you create rules for apply, the apply process is also configured as a subscriber to the specified queue. If a message is enqueued with no specified recipient and no subscriber created for it, the message is discarded because it cannot be delivered. To avoid this error, there must be at least one subscriber created for the queue into which you enqueue messages, or you must configure a recipient list in the message properties by using DBMS_AQ.ENQUEUE and the DBMS_AQ.MESSAGE_PROPERTIES_T type. Refer to the Oracle Streams Advanced Queuing User's Guide and Reference documentation for more details about specifying recipient lists.
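The second fix, specifying an explicit recipient in the message properties, can be sketched as follows. This example is not from the course material; it assumes the queue ix.streams_queue and the agent OE_AGNT from the earlier slides, and a simple VARCHAR2 payload wrapped in SYS.AnyData.

```sql
-- Sketch: enqueuing with an explicit recipient list so that
-- ORA-24033 is not raised even if no subscriber exists
DECLARE
  enqopt DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop  DBMS_AQ.MESSAGE_PROPERTIES_T;
  msgid  RAW(16);
  msg    SYS.AnyData;
BEGIN
  msg := SYS.AnyData.ConvertVarchar2('order shipped');

  -- The message is delivered only to the agents in this list
  mprop.recipient_list(1) := SYS.AQ$_AGENT('OE_AGNT', NULL, NULL);

  DBMS_AQ.ENQUEUE(
    queue_name         => 'ix.streams_queue',
    enqueue_options    => enqopt,
    message_properties => mprop,
    payload            => msg,
    msgid              => msgid);
  COMMIT;
END;
/
```

An explicit recipient list overrides the queue's subscriber list for that one message, which is why it resolves ORA-24033 without creating a subscriber.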

ORA-25224: Sender Name Must Be Specified
If you use DBMS_AQ.ENQUEUE to enqueue messages:
- Configure sender_id in the message properties before the message is enqueued.
- sender_id is an AQ agent.
- The specified AQ agent must have secure queue privileges.
- Verify privileges with DBA_AQ_AGENT_PRIVS:

    SELECT agent_name, db_username
    FROM   dba_aq_agent_privs;

    AGENT_NAME   DB_USERNAME
    -----------  -----------
    ORDERS_DEQ   IX
    HR           HR
    HR_CAPTURE   STRMADMIN

ORA-25224: Sender Name Must Be Specified for Enqueue into Secure Queues

You cannot explicitly enqueue messages or LCRs into a staging queue unless a subscriber exists for the message. To enqueue into a secure queue, the sender_id message property attribute must be set to an agent that has secure queue privileges for the queue. To specify a sender_id that is an AQ agent (for example, ORDERS_DEQ), use the following code fragment when enqueuing a message:

    SYS.AQ$_AGENT(name => 'ORDERS_DEQ', address => NULL, protocol => NULL)

Example of Procedure Using an AQ Agent

    CREATE OR REPLACE PROCEDURE enq_proc (msg IN SYS.AnyData) IS
      enqopt      DBMS_AQ.ENQUEUE_OPTIONS_T;
      mprop       DBMS_AQ.MESSAGE_PROPERTIES_T;
      enq_eventid RAW(16);
      rcpt_list   DBMS_AQ.aq$_recipient_list_t;
      agent       SYS.AQ$_AGENT;

ORA-25224: Sender Name Must Be Specified for Enqueue into Secure Queues (continued)

Example of Procedure Using an AQ Agent (continued)

    BEGIN
      agent := SYS.AQ$_AGENT(
                 name     => 'ORDERS_DEQ',
                 address  => NULL,
                 protocol => NULL);
      mprop.sender_id := agent;
      DBMS_AQ.ENQUEUE(
        queue_name         => 'ix.streams_queue',
        enqueue_options    => enqopt,
        message_properties => mprop,
        payload            => msg,
        msgid              => enq_eventid);
      COMMIT;
    END;
    /
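On the consuming side, a secure-queue subscriber dequeues under its agent name, which must have been associated with the session user via DBMS_AQADM.ENABLE_DB_ACCESS. The following is a sketch only, not part of the course example; it assumes the ORDERS_DEQ agent and queue ix.streams_queue from the slides above.

```sql
-- Sketch: dequeuing from the secure queue as the ORDERS_DEQ agent
-- (assumes the session user is associated with ORDERS_DEQ
--  through DBMS_AQADM.ENABLE_DB_ACCESS)
DECLARE
  deqopt DBMS_AQ.DEQUEUE_OPTIONS_T;
  mprop  DBMS_AQ.MESSAGE_PROPERTIES_T;
  msgid  RAW(16);
  msg    SYS.AnyData;
BEGIN
  -- On a multiconsumer queue, dequeue under the subscriber's name
  deqopt.consumer_name := 'ORDERS_DEQ';
  deqopt.wait          := DBMS_AQ.NO_WAIT;

  DBMS_AQ.DEQUEUE(
    queue_name         => 'ix.streams_queue',
    dequeue_options    => deqopt,
    message_properties => mprop,
    payload            => msg,
    msgid              => msgid);
  COMMIT;
END;
/
```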

