
1 Oracle LogMiner
Carl Dudley
Staffordshire University, UK
EOUG SIG Director
UKOUG SIG Director
cdudley@staffs.ac.uk

2 Oracle8i LogMiner
Used to analyze redo log (including archived log) information
Can help to solve database management issues by supporting
– Fine-grained recovery
– Auditing
– Tuning
– Application debugging
Provides similar functionality to the established third-party offerings
– Much better than ALTER SYSTEM DUMP LOGFILE

3 The LogMiner Environment
[Diagram: LogMiner reads the online and archived redo log files, using a dictionary file extracted from the data dictionary, and presents its results through a fixed (v$) view]

4 Redo Log Analysis
DBAs can pinpoint when a logical corruption has occurred
– Can be used to find limits (stop points) for incomplete recovery
Fine-grained recovery can be used to perform table-level undo
– Prevents the need for a restore of the table to a previous state followed by a roll forward
SQL statements are reconstructed and can be seen in v$logmnr_contents
– sql_undo holds the SQL statement which would undo the change
– sql_redo holds the SQL statement which would redo the change

5 The LogMiner Dictionary
LogMiner requires a dictionary file to be built
– Must first set a value for UTL_FILE_DIR in the parameter file
– It uses the UTL_FILE package to build the dictionary
Dictionary file is created by the dbms_logmnr_d package
– Extracts the data dictionary into an external (dictionary) file in the directory pointed to by UTL_FILE_DIR
– Has only one public procedure, called build
The dbms_logmnr_d package is built by running the script dbmslogmnrd.sql as sys (dbmslmd.sql on 8.1.6)

PROCEDURE build (dictionary_filename IN VARCHAR2,
                 dictionary_location IN VARCHAR2);

6 Building the LogMiner Dictionary
Open the database whose logs you wish to analyse
To monitor the dictionary build, first issue

SET SERVEROUTPUT ON
EXECUTE dbms_logmnr_d.build (dictionary_filename => 'orcldict.ora', dictionary_location => 'c:\lmnr');

7 The Log of the Dictionary Creation
SQL> execute dbms_logmnr_d.build('test_dictionary7.ora','c:\rmanback');
LogMnr Dictionary Procedure started
LogMnr Dictionary File Opened
TABLE: OBJ$ recorded in LogMnr Dictionary File
TABLE: TAB$ recorded in LogMnr Dictionary File
TABLE: COL$ recorded in LogMnr Dictionary File
TABLE: SEG$ recorded in LogMnr Dictionary File
TABLE: UNDO$ recorded in LogMnr Dictionary File
TABLE: UGROUP$ recorded in LogMnr Dictionary File
TABLE: TS$ recorded in LogMnr Dictionary File
TABLE: CLU$ recorded in LogMnr Dictionary File
TABLE: IND$ recorded in LogMnr Dictionary File
TABLE: ICOL$ recorded in LogMnr Dictionary File
TABLE: LOB$ recorded in LogMnr Dictionary File
TABLE: USER$ recorded in LogMnr Dictionary File
TABLE: FILE$ recorded in LogMnr Dictionary File
TABLE: PARTOBJ$ recorded in LogMnr Dictionary File
TABLE: PARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABPART$ recorded in LogMnr Dictionary File
TABLE: INDPART$ recorded in LogMnr Dictionary File
TABLE: SUBPARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABSUBPART$ recorded in LogMnr Dictionary File
TABLE: INDSUBPART$ recorded in LogMnr Dictionary File
TABLE: TABCOMPART$ recorded in LogMnr Dictionary File
TABLE: INDCOMPART$ recorded in LogMnr Dictionary File
Procedure executed successfully - LogMnr Dictionary Created

8 The Need for the Dictionary File
The dictionary file is used to translate internal object identifiers and data types to object names and external data formats
– Without it, LogMiner shows only internal object_ids and hexadecimal data in the sql_undo and sql_redo columns of v$logmnr_contents

insert into UNKNOWN.Objn:2875(Col[1],Col[2],Col[3],Col[4],Col[5],Col[6],Col[7],Col[8]) values
(HEXTORAW('c24f28'),HEXTORAW('4b494e47202020202020'),HEXTORAW('505245534944454e54'),NULL,
HEXTORAW('77b50b11010101'),HEXTORAW('c233'),NULL,HEXTORAW('c10b'));

Changes to the database schema require a new dictionary file to be built

9 Contents of the Dictionary File
CREATE_TABLE DICTIONARY_TABLE ( DB_NAME VARCHAR2(9), DB_ID NUMBER(20),
INSERT_INTO DICTIONARY_TABLE VALUES ('ORAC',665102398,'05/14/2000 20:54
CREATE_TABLE OBJ$_TABLE (OBJ# NUMBER(22), DATAOBJ# NUMBER(22),
INSERT_INTO OBJ$_TABLE VALUES (47,47,0,'I_CDEF1',4,'',1,to_date('0...
INSERT_INTO OBJ$_TABLE VALUES (17,17,0,'FILE$',1,'',2,to_date('02/...
INSERT_INTO OBJ$_TABLE VALUES (15,15,0,'UNDO$',1,'',2,to_date('02/...
INSERT_INTO OBJ$_TABLE VALUES (45,45,0,'I_CON1',4,'',1,to_date('02...
INSERT_INTO OBJ$_TABLE VALUES (5,2,0,'CLU$',1,'',2,to_date('02/27/...
INSERT_INTO OBJ$_TABLE VALUES (38,38,0,'I_FILE1',4,'',1,to_date('0...
INSERT_INTO OBJ$_TABLE VALUES (37,37,0,'I_ICOL1',4,'',1,to_date('0...
INSERT_INTO OBJ$_TABLE VALUES (40,40,0,'I_TS1',4,'',1,to_date('02/...
INSERT_INTO OBJ$_TABLE VALUES (53,53,0,'BOOTSTRAP$',1,'',2,to_date...

Note the underscore between the first two keywords
– Prevents tables being created by inadvertent execution of the script

10 The dbms_logmnr Package
Created by dbmslogmnr.sql
– (dbmslm.sql/prvtlm.plb on 8.1.6)
Has three procedures
– add_logfile -- Includes or removes a logfile in the list of logs to be analyzed
– start_logmnr -- Initiates the analysis of the logs
– end_logmnr -- Closes the LogMiner session
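Taken together, the three procedures frame a typical LogMiner session. A minimal sketch (the log file name and dictionary path here are illustrative, not from the slides):

```sql
-- Hypothetical file names - substitute your own log and dictionary files
EXECUTE dbms_logmnr.add_logfile (LogFileName => 'log1orcl.ora', Options => dbms_logmnr.NEW);
EXECUTE dbms_logmnr.start_logmnr (DictFileName => 'c:\lmnr\orcldict.ora');

-- Mine the results while the session is open
SELECT sql_redo, sql_undo FROM v$logmnr_contents;

-- Release the memory held by the session
EXECUTE dbms_logmnr.end_logmnr;
```

The SELECT must run in the same session that called start_logmnr, since the analysis lives in that session's memory (see the PGA discussion later).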

11 Examining the Redo Log Files
The LogMiner must first be given a list of log files to analyze

EXECUTE dbms_logmnr.add_logfile (Options => dbms_logmnr.NEW, logfilename => 'log1orcl.ora');
– This action clears any existing list of logs to be analyzed and starts a new list, with log1orcl.ora as first in the list

EXECUTE dbms_logmnr.add_logfile (Options => dbms_logmnr.ADDFILE, logfilename => 'log2orcl.ora');
– This action adds log2orcl.ora to the list of logs to be analyzed

EXECUTE dbms_logmnr.add_logfile (Options => dbms_logmnr.REMOVEFILE, logfilename => 'log1orcl.ora');
– This action removes log1orcl.ora from the list of logs to be analyzed

12 Examining the Redo Log Files (2)
Include all (three) online redo log files for analysis
– Database can be in OPEN, MOUNT or NOMOUNT state

EXECUTE dbms_logmnr.add_logfile (Options => dbms_logmnr.NEW, logfilename => 'log1orcl.ora');
EXECUTE dbms_logmnr.add_logfile (Options => dbms_logmnr.ADDFILE, logfilename => 'log2orcl.ora');
EXECUTE dbms_logmnr.add_logfile (Options => dbms_logmnr.ADDFILE, logfilename => 'log3orcl.ora');

Evidence of logs chosen for analysis can be found in v$logmnr_logs

13 Automatic Log List Generation (1)
Procedure to generate a list of logs between specified periods
– Important to limit analysis to conserve memory in the UGA

CREATE OR REPLACE PROCEDURE log_spec
  (log_dir      IN VARCHAR2,
   log_list     IN VARCHAR2,
   start_period VARCHAR2,
   end_period   VARCHAR2) AS
  sql_stmt  VARCHAR2(4000);
  file_out  UTL_FILE.FILE_TYPE;
  counter   NUMBER(4) := 1;
  file_buff VARCHAR2(2000);
  CURSOR log_cur IS
    SELECT name FROM v$archived_log
    WHERE first_time BETWEEN TO_DATE(start_period,'dd-mon-yyyy hh24:mi:ss')
                         AND TO_DATE(end_period,'dd-mon-yyyy hh24:mi:ss')
       OR next_time  BETWEEN TO_DATE(start_period,'dd-mon-yyyy hh24:mi:ss')
                         AND TO_DATE(end_period,'dd-mon-yyyy hh24:mi:ss');

14 Automatic Log List Generation (2)
BEGIN
  file_out := UTL_FILE.FOPEN(log_dir,log_list,'w');
  FOR log_rec IN log_cur LOOP
    IF counter = 1 THEN
      file_buff := 'DBMS_LOGMNR.ADD_LOGFILE ('''||log_rec.name||''',DBMS_LOGMNR.NEW);';
    ELSE
      file_buff := 'DBMS_LOGMNR.ADD_LOGFILE ('''||log_rec.name||''',DBMS_LOGMNR.ADDFILE);';
    END IF;
    UTL_FILE.PUT_LINE(file_out,'BEGIN');
    UTL_FILE.PUT_LINE(file_out,file_buff);
    UTL_FILE.PUT_LINE(file_out,'END;');
    UTL_FILE.PUT_LINE(file_out,'/');
    counter := counter + 1;
  END LOOP;
  UTL_FILE.FCLOSE(file_out);
END;

15 Automatic Log List Generation (3)
Generate a list of the target logs in listlogs.sql in the UTL_FILE_DIR directory

BEGIN
  log_spec(log_dir => 'c:\logminer', log_list => 'listlogs.sql',
           start_period => '21-JAN-2001 13:45:00', end_period => '21-JAN-2001 14:45:00');
END;
/

Contents of listlogs.sql

BEGIN
DBMS_LOGMNR.ADD_LOGFILE('D:\ORACLE\ORADATA\ORAC\ARCHIVE\ORACT001S00192.ARC',DBMS_LOGMNR.NEW);
END;
/
BEGIN
DBMS_LOGMNR.ADD_LOGFILE('D:\ORACLE\ORADATA\ORAC\ARCHIVE\ORACT001S00193.ARC',DBMS_LOGMNR.ADDFILE);
END;
/
...

16 Starting the LogMiner
When conducting an analysis, the LogMiner can be given a list of log files or a period of activity to analyze

EXECUTE dbms_logmnr.start_logmnr (Dictfilename => 'orcldict.ora', StartTime => '01-dec-1999 09:00:00', EndTime => '01-dec-1999 09:30:00');

This action populates v$logmnr_contents with information on activity between 9:00 and 9:30am on 1st Dec 1999

17 Viewing LogMiner Information
A simple example

SELECT sql_redo, sql_undo FROM v$logmnr_contents;

SQL_REDO
--------
delete from EMP where EMPNO = 7777 and ROWID = 'AAACOOAEBAADPCACA';

SQL_UNDO
--------
insert into EMP(EMPNO, SAL) values (7777,1500)

Without a dictionary file, you can expect to see this kind of output

insert into UNKNOWN.Objn:2875(Col[1],Col[2],Col[3],Col[4],Col[5],Col[6],Col[7],Col[8]) values
(HEXTORAW('c24f28'),HEXTORAW('4b494e47202020202020'),HEXTORAW('505245534944454e54'),NULL,
HEXTORAW('77b50b11010101'),HEXTORAW('c233'),NULL,HEXTORAW('c10b'));

18 Viewing LogMiner Information
Tracking Fred's activity at a particular time

SELECT username, scn, TO_CHAR(timestamp,'dd-mon-yyyy hh24:mi:ss') time, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  username = 'FRED'
AND    TO_CHAR(timestamp,'dd-mon-yyyy hh24:mi:ss')
       BETWEEN '01-DEC-1999 09:02:00' AND '01-DEC-1999 09:04:00';

19 Limitations/Features of the LogMiner (1)
LogMiner generates low-level SQL, not what was actually issued
– Cannot fully trace an application
-- If an update statement updates three rows, three separate row-level update statements are shown in the SQL_UNDO column

UPDATE emp SET sal = 4000 WHERE deptno = 10;
3 rows updated.

SQL_UNDO
------------------------------------------------------------
update EMP set SAL = 2450 where rowid = 'AAACOOAEBAADPCACI';
update EMP set SAL = 1300 where rowid = 'AAACOOAEBAADPCACN';
update EMP set SAL = 5000 where rowid = 'AAACOOAEBAADPCACG';

Cannot cope with objects and chained/migrated rows

SQL_UNDO
---------------------------------
Unsupported (Chained Row)

20 Limitations/Features of the LogMiner (2)
Requires an up-to-date dictionary file to produce meaningful output
DDL is not supported (e.g. CREATE TABLE)
– Cannot access SQL on the data dictionary in a visible form
– Shows updates to base dictionary tables due to DDL operations
-- Evidence of CREATE TABLE can be seen as an insert into tab$
Reconstructed SQL is generated only for DELETE, UPDATE, INSERT and COMMIT
ROLLBACK does not generate any data for sql_undo
– A rollback flag is set instead

SQL_REDO   SQL_UNDO   ROLLBACK
---------  ---------  --------
insert...  delete...         0
delete...                    1

21 Limitations/Features of the LogMiner (3)
All LogMiner memory is in the PGA
– So all information is lost when the LogMiner session is closed
– LogMiner cannot be used in an MTS environment
– v$logmnr_contents can be seen only by the LogMiner session
– Make permanent with CTAS
-- Avoids having to reload the information when in a new session

CREATE TABLE logmnr_tab01 AS SELECT * FROM v$logmnr_contents;

v$logmnr_contents is not currently indexed
– May be efficient to CTAS and build indexes on the table

22 Analysing Oracle 8.0 Logfiles
LogMiner can build a dictionary file from an Oracle 8.0 database using dbms_logmnr_d.build
The dictionary file can be used to analyse Oracle 8.0 logs
– All analysis must be done while connected to an Oracle 8.1 database
LogMiner can be used against ANY 'foreign' version 8.x database with the appropriate 'foreign' dictionary file
– Database must have the same character set and hardware platform

23 Calculating Table Access Statistics
Start LogMiner to determine activity during a two-week period in August

EXECUTE dbms_logmnr.start_logmnr( StartTime => '07-Aug-99', EndTime => '15-Aug-99', DictFileName => '/usr/local/dict.ora');

Query v$logmnr_contents to determine activity during the period

SELECT seg_owner, seg_name, COUNT(*) AS Hits
FROM   v$logmnr_contents
WHERE  seg_name NOT LIKE '%$'
GROUP  BY seg_owner, seg_name;

SEG_OWNER SEG_NAME Hits
--------- -------- ----
FINANCE   ACCNT     906
SCOTT     EMP        54
FINANCE   ORD       276
SCOTT     DEPT       39

24 The LogMiner Views
v$logmnr_dictionary
– Information on the dictionary file in use
v$logmnr_logs
– Information on log files under analysis
– Log sequence numbers for each log
-- If the log files do not have consecutive sequence numbers, an entry is generated signifying a 'gap'
– High and low SCNs, high and low times of all currently registered logs
v$logmnr_parameters
– Current LogMiner session parameters
– High and low SCNs, high and low times, info, status
v$logmnr_contents
– Results of analysis
– Contains masses of information (> 50 columns!)
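For instance, the registered log list and the SCN range each log covers can be checked before starting an analysis. A sketch (the column selection here is an assumption - verify the exact column names against your release's description of v$logmnr_logs):

```sql
-- Which logs are registered for this session, and what range do they cover?
SELECT log_id, filename, low_scn, next_scn
FROM   v$logmnr_logs;
```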

25 Columns in v$logmnr_contents
operation
– Shows the type of SQL operation
-- INSERT, UPDATE, DELETE, COMMIT, BEGIN_TRANSACTION
-- All other operations are reported as UNSUPPORTED or INTERNAL_OPERATION
seg_type_name
– Only tables are supported in the first release
rollback
– '1' represents a rollback operation, otherwise '0'
xidusn, xidslt, xidsqn
– The combination of these three columns identifies a transaction
row_id
– The ROWID of the affected row

26 Tracking Changes Made by a Transaction
Typical scenario - you have found an anomaly in a column
– Use the column mapping facility to find operations which have affected that column
– Use a query on the relevant placeholder column in v$logmnr_contents to find the values of xidusn, xidslt and xidsqn for the offending operation
– Search v$logmnr_contents for operations containing matching transaction identifiers
-- Will show operations of any firing database triggers
-- This will enable you to gather the necessary undo to roll the entire transaction back
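The final search can be sketched as follows; the three transaction-id values are hypothetical ones that the earlier placeholder query would have returned:

```sql
-- Gather every operation belonging to one transaction (including any
-- trigger-fired statements), in SCN order, so that the sql_undo column
-- can be used to back the whole transaction out
SELECT scn, operation, sql_undo
FROM   v$logmnr_contents
WHERE  xidusn = 5 AND xidslt = 24 AND xidsqn = 118  -- illustrative values
ORDER  BY scn;
```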

27 Tracking Changes to Specific Columns (1)
LogMiner can be used to monitor changes on specific columns
– Need to create a file called logmnr.opt
– MUST be held in the same directory as the dictionary file
LogMiner must be started with the Options parameter set to dbms_logmnr.USE_COLMAP

EXECUTE dbms_logmnr.start_logmnr(DictFileName => '/usr/local/dict.ora', Options => dbms_logmnr.USE_COLMAP);

28 Tracking Changes to Specific Columns (2)
logmnr.opt has a very rigid format
– Each entry has to be syntactically and lexically correct

colmap = SCOTT EMP (1, ENAME);

Changes to the ename column in scott's emp table will be shown in the first set (PH1...) of placeholder columns
v$logmnr_contents has five sets of placeholder columns showing the name of the column, the original and new values

PH1_NAME, PH1_REDO, PH1_UNDO
PH2_NAME, PH2_REDO, PH2_UNDO
PH3_NAME, PH3_REDO, PH3_UNDO
PH4_NAME, PH4_REDO, PH4_UNDO
PH5_NAME, PH5_REDO, PH5_UNDO

29 Tracking Changes to a Schema
Scripts can be written to generate entries for logmnr.opt
The following package populates placeholder columns with changes on any table in a given schema
– Must be created under SYS

CREATE OR REPLACE PACKAGE filesave AS
  FUNCTION logmnr(directory  VARCHAR2,
                  schemaname VARCHAR2,
                  filename   VARCHAR2 DEFAULT 'logmnr.opt') RETURN INTEGER;
END filesave;

30 Schema Changes - Package Body (1)
CREATE OR REPLACE PACKAGE BODY filesave AS
FUNCTION logmnr(
  directory  VARCHAR2,
  schemaname VARCHAR2,
  filename   VARCHAR2 DEFAULT 'logmnr.opt') RETURN INTEGER IS
  CURSOR curtab IS
    SELECT object_name table_name FROM dba_objects
    WHERE owner=UPPER(schemaname) AND object_type='TABLE'
    ORDER BY object_id ASC;
  t_tab curtab%ROWTYPE;
  CURSOR curcol (v_schemaname VARCHAR2, v_table_name VARCHAR2) IS
    SELECT column_name FROM dba_tab_columns
    WHERE owner=UPPER(v_schemaname) AND table_name=v_table_name
    AND column_id < 6 ORDER BY column_id ASC;
  t_col curcol%ROWTYPE;
  i           NUMBER;
  v_errorcode NUMBER;
  v_errormsg  VARCHAR2(255);
  v_line      VARCHAR2(255);
  v_sep       VARCHAR2(5);
  v_map       VARCHAR2(30) := 'colmap = ' || UPPER(schemaname) || ' ';
  v_return    INTEGER := 0;
  v_handle    UTL_FILE.FILE_TYPE;

31 Schema Changes - Package Body (2)
BEGIN
  v_handle := UTL_FILE.FOPEN(directory,filename,'w');
  FOR t_tab IN curtab LOOP
    v_line := NULL;
    v_sep  := NULL;
    i := 1;
    FOR t_col IN curcol(schemaname,t_tab.table_name) LOOP
      v_line := v_line||v_sep||TO_CHAR(i)||', '||t_col.column_name;
      v_sep  := ', ';
      i := i+1;
    END LOOP;
    UTL_FILE.PUT_LINE(v_handle,v_map||t_tab.table_name||' ('||v_line||');');
    UTL_FILE.FFLUSH(v_handle);
    dbms_output.put_line(v_map||t_tab.table_name||' ('||v_line||');');
  END LOOP;
  UTL_FILE.FFLUSH(v_handle);
  UTL_FILE.FCLOSE(v_handle);
  RETURN(v_return);
EXCEPTION
  WHEN OTHERS THEN
    v_errorcode := SQLCODE;
    v_errormsg  := SUBSTR(RTRIM(LTRIM(SQLERRM)),1,200);
    dbms_output.put_line('error : >');
    dbms_output.put_line(v_errormsg);
    IF UTL_FILE.IS_OPEN(v_handle) THEN
      UTL_FILE.FFLUSH(v_handle);
      UTL_FILE.FCLOSE(v_handle);
    END IF;
    RETURN(-1);
END logmnr;
END filesave;

32 Construction of logmnr.opt
SET SERVEROUT ON SIZE 20000
DECLARE
  v INTEGER;
BEGIN
  v := filesave.logmnr('c:\temp','SCOTT');
END;
/

colmap = SCOTT DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = SCOTT EMP (1, EMPNO, 2, ENAME, 3, JOB, 4, MGR, 5, HIREDATE);
colmap = SCOTT BONUS (1, ENAME, 2, JOB, 3, SAL, 4, COMM);
colmap = SCOTT SALGRADE (1, GRADE, 2, LOSAL, 3, HISAL);
colmap = SCOTT TEST (1, COL1);
colmap = SCOTT TEST2000 (1, COL1);

Only the first five columns in each table can be accommodated

33 Placeholder Data
Changes to the ename column in scott's emp table will be shown in the first set (PH1...) of placeholder columns
– Undo and redo values for any transaction making changes to ename in the emp table can be seen in the ph1_undo and ph1_redo columns
– The SQL can also be seen in sql_redo and sql_undo, but the placeholders make searching for the information much easier

SCN    SQL_REDO                                  PH1_UNDO  PH1_REDO
-----  ----------------------------------------  --------  --------
12763  UPDATE emp SET ename = 'JACKSON' WHERE..  FORD      JACKSON
12763  UPDATE emp SET ename = 'JACKSON' WHERE..  WARD      JACKSON
12764  UPDATE dept SET dname =..
12765  DELETE FROM dept WHERE..
12765  INSERT INTO emp VALUES(1111,'REES'..                REES
12765  DELETE FROM emp WHERE ename = 'COX'       COX

34 Tracking DDL Statements
DDL is not directly shown
The effect of a DDL statement can be seen by looking for changes made by the sys user to the base dictionary tables
– The timestamp column in v$logmnr_contents can be used to find the time at which the DDL was issued
– Useful for finding the timing of a DROP TABLE statement
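Such a search can be sketched as below; the table name EMPLOYEES is illustrative, and the assumption is that the DDL of interest shows up as sys activity against obj$ mentioning that name:

```sql
-- Look for dictionary base table changes made by SYS that mention the
-- (hypothetical) table EMPLOYEES, to pin down when the DDL was issued
SELECT TO_CHAR(timestamp,'dd-mon-yyyy hh24:mi:ss') time, sql_redo
FROM   v$logmnr_contents
WHERE  username = 'SYS'
AND    sql_redo LIKE '%OBJ$%'
AND    sql_redo LIKE '%EMPLOYEES%';
```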

35 Table Creation
Create a new table

CREATE TABLE employees (...

– Evidence of table creation can be seen in sql_redo

insert into SYS.OBJ$(OBJ#,DATAOBJ#,OWNER#,NAME,NAMESPACE,SUBNAME,TYPE#,CTIME,MTIME,STIME,STATUS,REMOTEOWNER,LINKNAME,FLAGS) values (2812,2812,19,'EMPLOYEES',1,NULL,2,TO_DATE('09-MAR-2000 19:08:12','DD-MON-YYYY HH24:MI:SS'),1,NULL,NULL,0);

Note obj# = dataobj#

36 Trigger Creation
Evidence of trigger creation can be seen in sql_redo

set transaction read write;
insert into SYS.TRIGGER$(OBJ#,TYPE#,UPDATE$,INSERT$,DELETE$,BASEOBJECT,REFOLDNAME,REFNEWNAME,DEFINITION,WHENCLAUSE,REFPRTNAME,ACTIONSIZE,ENABLED,PROPERTY,SYS_EVTS,NTTRIGCOL,NTTRIGATT,ACTION#) values (2811,0,0,0,0,0,NULL,NULL,NULL,NULL,NULL,73,NULL,0,0,0,0,
'BEGIN
   IF :NEW.SAL > 2*:OLD.COMM THEN
     :NEW.SAL := :OLD.SAL;
   END IF;
 END;

37 Oracle9i LogMiner - Release 1
Full support for analysis of DDL statements
Use of the online data dictionary
Use of dictionary definitions extracted into the redo stream
Automatic detection of dictionary staleness
Can now skip corrupted blocks in the redo stream
Limit analysis to committed transactions only
Analyse (MINE) changes to particular columns
GUI frontend - Oracle LogMiner Viewer (within OEM)
– Easier to learn and use

38 Oracle9i LogMiner Dictionary Usage (1)
Data dictionary can be extracted into the redo stream with a new option

EXECUTE dbms_logmnr_d.build (options => dbms_logmnr_d.STORE_IN_REDO_LOGS);

– No dictionary file to be managed
– Dictionary is backed up via the redo logs
-- Produces lots of redo (minimum 12Mb) - but faster than flat file
– Database must be in archivelog mode
– No DDL allowed during extraction, so dictionary is consistent
To use in the analysis session, specify

EXECUTE dbms_logmnr.start_logmnr (options => dbms_logmnr.DICT_FROM_REDO_LOGS);

– The logmnr... dictionary views are populated
-- The objv# columns track version changes

39 Oracle9i LogMiner Dictionary Usage (2)
The online data dictionary can also be used to analyze the logs
– Available when the database is open
– Specify the following in the dbms_logmnr.start_logmnr procedure

EXECUTE dbms_logmnr.start_logmnr (options => dbms_logmnr.DICT_FROM_ONLINE_CATALOG);

Use this to analyze recent redo logs or when the structure of objects under test has not changed
– Absence of mismatches between online dictionary and redo log contents is assumed

40 Oracle9i LogMiner Dictionary Usage (3)
Tracking the dictionary changes
– Update internal dictionary to keep track of changes in the redo stream
– Can concatenate multiple options in the options clause

EXECUTE dbms_logmnr.start_logmnr (options => dbms_logmnr.DICT_FROM_REDO_LOGS + dbms_logmnr.DDL_DICT_TRACKING);

Dictionary staleness can be detected
– LogMiner knows about different object versions
– Relevant when the dictionary is in a flat file or in the redo logs

41 Oracle9i LogMiner Support for DDL
Now shows DDL (as originally typed) plus details of the user and process issuing the statement

SELECT sql_redo FROM v$logmnr_contents
WHERE operation = 'DDL' AND seg_owner = 'FRED';

create table fred.dept(deptno number, dname varchar2(20));

SELECT sql_redo FROM v$logmnr_contents
WHERE operation = 'DDL' AND username = 'SYS';

create user fred identified by values 'C422E820763B4D55';

42 Oracle9i Supplemental Logging
Columns which are not actually changed can be logged
– Use this to identify rows via primary keys
-- Not dependent on ROWID - gives portability
-- Invalidates all DML cursors in the cursor cache
Database supplemental logging
– Columns involved in primary and unique keys can be logged

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

– To turn off supplemental logging

ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

43 Oracle9i Supplemental Logging Example
Example update statement

UPDATE emp SET sal = 1500 WHERE deptno = 10;

sql_redo without supplemental logging

update "SCOTT"."EMP" set "SAL" = '1500' where "DEPTNO" = 10 and ROWID = 'AAACOOAEBAADPCACI'

sql_redo with supplemental logging

update "SCOTT"."EMP" set "SAL" = '1500' where "DEPTNO" = 10 and "EMPNO" = '7782' and ROWID = 'AAACOOAEBAADPCACI'

44 Table Level Supplemental Logging
Specify groups of columns to be logged

ALTER TABLE emp ADD SUPPLEMENTAL LOG GROUP emp_log (ename,sal,comm);
ALTER TABLE emp ADD SUPPLEMENTAL LOG GROUP emp_log (ename,sal,comm) ALWAYS;

For each changed row in a specific table
– If ALWAYS is used
-- All columns in the group are logged if ANY column in the row is changed
– If ALWAYS is not used
-- All columns in the group are logged only if at least one column in the group is changed
Observe in dba_log_groups, dba_log_group_columns
Remove log groups using a DROP statement

ALTER TABLE emp DROP SUPPLEMENTAL LOG GROUP emp_log;
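The defined groups can then be inspected in the two dictionary views mentioned above. A sketch (the join and column names are an assumption - check them against your release):

```sql
-- List supplemental log groups on EMP and their member columns
SELECT g.log_group_name, g.always, c.column_name
FROM   dba_log_groups g, dba_log_group_columns c
WHERE  g.log_group_name = c.log_group_name
AND    g.table_name = 'EMP';
```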

45 Oracle9i Query by Data Value
mine_value
– Returns the actual value of data being changed
– Can mine data using redo_value and undo_value in v$logmnr_contents

SELECT dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL')
FROM v$logmnr_contents;

Returns the new value of sal from each redo record
– If the column is not changed, it returns NULL
Could be used to find how many salaries were updated to > $50K
– Which employees have had the new salaries
– Who had more than a 10% increase

SELECT sql_redo FROM v$logmnr_contents
WHERE dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL') >
      dbms_logmnr.mine_value(undo_value,'FRED.EMP.SAL') * 1.1
AND operation = 'UPDATE';
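The "> $50K" question mentioned above can be sketched in the same style, reusing the slide's FRED.EMP.SAL example:

```sql
-- Count the updates that set SAL to more than 50000
SELECT COUNT(*)
FROM   v$logmnr_contents
WHERE  operation = 'UPDATE'
AND    dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL') > 50000;
```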

46 Oracle9i Query by Data Value (continued)
column_present
– Distinguishes between different types of NULLs in redo_value and undo_value by returning:
0 if the column has not been changed
1 if the column is being set to null

SELECT dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL'), sql_redo
FROM v$logmnr_contents
WHERE dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL') IS NOT NULL
OR (dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL') IS NULL
    AND dbms_logmnr.column_present(redo_value,'FRED.EMP.SAL') = 1);

No need for placeholder columns any more

47 Oracle9i LogMiner - New Features
Display information from committed transactions only
– Rows in v$logmnr_contents are grouped by transaction identifier
– Populates the cscn column and can show rolled back transactions

EXECUTE dbms_logmnr.start_logmnr(options => ... + dbms_logmnr.COMMITTED_DATA_ONLY);

SCN  CSCN SQL_REDO
---- ---- -------------------------------------------------------------
2012 2017 insert into "FRED"."DEPT" values('50','ARTS','PARIS');
2016 2017 delete from "FRED"."DEPT" where ROWID = 'AAACOOAEBAADPCACI';

Continue past corruptions in the redo log
– The info column in v$logmnr_contents shows 'Log file corruption encountered'

EXECUTE dbms_logmnr.start_logmnr(options => ... + dbms_logmnr.SKIP_CORRUPTION);

48 Oracle9i LogMiner Viewer

49 Oracle9i LogMiner Viewer

50 Oracle9i Release 2 LogMiner - Logical Standby
Logical hot standby
– LogMiner is used to generate (imitate) SQL statements to keep the standby in step
-- Automatic gathering of sections of redo logs used in logical standbys
– Allows the standby to be open for normal use
– Can accommodate extra objects such as materialized views and indexes

51 Oracle9i Release 2 LogMiner
dbms_logmnr_session
– LogMiner core architecture has been redesigned
– Allows persistent sessions (LOGMNR_MAX_PERSISTENT_SESSIONS)
Support of long-running applications based on the redo streams
– Allows the saving of the mining context and the resumption of mining at some later time
Can be used to tune the performance of mining
– Employ multiple processes
– Memory usage (how much of the SGA)
– Disk usage (fraction of tablespace staged for large transactions)
– Log read frequency

52 Oracle9i Release 2 LogMiner Persistent Sessions
Can be programmed to run continuously and wait for new redo logs
– Set the wait_for_log flag in dbms_logmnr_session.create_session
The Remote File Server can be set up to automatically add archived log files to a persistent LogMiner session
– Set the auto_add_archived flag in dbms_logmnr_session.create_session
Continuous analysis requires the listener.ora, init.ora and tnsnames.ora files to be configured
– Similar to the requirements for hot standby

53 Oracle9i LogMiner (continued)
LogMiner now supports
– Index-organized tables
– Chained rows
– LOBs and LONGs
– Scalar object types
– DDL statements (triggers?)
– Parallel DML
Still not supported
– Clustered tables
– Direct loads
– Objects (again!)… nested tables, VARRAYs, REFs

54 Oracle LogMiner
Carl Dudley
Staffordshire University, UK
EOUG SIG Director
UKOUG SIG Director
cdudley@staffs.ac.uk

