Managing Multiple Databases Without Multiple DBAs

1 Managing Multiple Databases Without Multiple DBAs
Brian Hitchcock, OCP DBA 8, 8i, 9i
Global Sales IT, Sun Microsystems
NoCOUG

2 What Was Needed
Monitor status of development databases
  Some ‘production’ dbs
Regular backups
Support multiple dbs
Easy to add, remove dbs, DBAs
Platform independent
Database vendor independent
Free (i.e. no money to spend)

3 Why Was This Needed?
Production database support
  Data centers
Development database support
  None
  Some ‘production’ dbs as well
Need to monitor, backup, support all dbs not in data centers
Some dbs outside company firewalls
  Standard monitoring tools don’t work

4 Requirements
Support multiple databases
  Easy to add/drop a db, DBA
Platform independent
  Doesn’t require any tools or products from any specific vendor
Database vendor independent
  Same process works for Oracle, Sybase, MySQL
Free

5 What Was Done
Solution?
cron jobs to execute scripts
Scripts to
  monitor
  backup
  gather performance data
  purge performance data
Script output sent in email (see the sketch below)
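
A minimal sketch of the pattern every one of these scripts follows (hypothetical names and paths, not one of the actual scripts shown later in the deck): run the checks, append everything to a plain-text log, then mail the log.

#!/bin/sh
# Sketch of the cron-script pattern (hypothetical log path, address placeholder).
LOG=/tmp/dbcheck_$$.out
NOTIFY_LIST='<email address>'
date  >  ${LOG}                 # when the job ran
df -k >> ${LOG}                 # any OS-level checks
# ...database commands would append their output to ${LOG} here...
/usr/bin/mailx -s "dbcheck for `hostname`" ${NOTIFY_LIST} < ${LOG}
rm ${LOG}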

6 Requirements Satisfied
Support multiple databases
  Add new database machine
    email, cron (UNIX)
  Add new database
    install cron scripts
  Drop database
    remove cron scripts

7 Requirements Satisfied
Platform independent
  email works everywhere
  cron (UNIX)
  email viewer works on multiple platforms
DBAs can be local, remote, at home, anywhere
Works in secure environments

8 Requirements Satisfied
Database vendor independent
  email, cron scripts
  Can use commands for any database
  Output is simple text
  Easy to modify to vendor specifics
Outputs provide written record
  emails backed up automatically

9 Free
email, cron
  Basic system utilities provided with the OS
No need to run any additional apps on the database machine
  Oracle OEM needs ‘agent’ for some tasks

10 Secure Environments
db* scripts
  Use OS utilities
    may not be available
    ‘oracle’ may not be able to execute
Modify scripts as needed
  Use OS utilities that are available
  Execute scripts as ‘root’

11 cron scripts
dbdoc         Document database - weekly
dbexport      Full export - weekly
dbsttspck     Execute STATSPACK snapshot - every 15 minutes
dbsttspck_rm  Purge old snapshot data - weekly

12 What’s Not to Like?
Not cool
  Won’t impress your DBA friends
  No multi-color graphs
  No web services
Won’t alert you to a performance metric exceeding a preset threshold

13 Monitoring Multiple Databases
Review emails
Compare with spreadsheet of dbs being monitored
Resolve issues
What happened this weekend?
  Sort by date, look at Sat/Sun
  Did all the cron jobs run?

14 For each database
Sort by subject
How often are backups happening?
What was the value of shared_pool_size last week?
How big are the export files?
  How fast is the database growing?
Errors in the alert log?
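
Because each run is also saved as a plain-text log file on disk, questions like these can be answered with grep and ls; a sketch, assuming the default log and export directories used by the scripts later in this deck.

# What was shared_pool_size in each saved dbdoc run?
grep -i shared_pool_size ${ORACLE_HOME}/admin/dbdoc_log/DBDOC_${ORACLE_SID}_*.out

# How big are the export files, and so how fast is the database growing?
ls -l ${ORACLE_HOME}/admin/dbexport_exp/DBEXPORT_${ORACLE_SID}_*.exp.Z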

15 Spreadsheet
Databases supported
  Hostname
  Domain
  Database name
  OS version
  Database vendor, version
  cron scripts installed
  Open tasks
  Contacts

16 dbexport emails By Date

17 dbexport emails By Subject

18 Tracking Export Files
*****************************************
Existing dbexport export files...
current sysdate
Sun Aug 17 15:39:56 PDT 2003
cd ${EXPORTDIR}
/db/archive/ISEDB/dbexport_exp
total
-rw-r--r--  oracle dba  Aug  1 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug  2 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug  3 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug  4 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug  5 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug  6 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug  7 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug  8 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug  9 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug 10 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug 11 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug 12 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug 13 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug 14 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug 15 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug 16 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug 17 15:36 DBEXPORT_ISEDB_ _15:30:04.exp.Z

19 Tracking Export Files
*****************************************
Remove dbexport export files older than 14 days
/db/archive/ISEDB/dbexport_exp
total
-rw-r--r--  oracle dba  Aug  2 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug  3 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug  4 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug  5 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug  6 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug  7 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug  8 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug  9 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug 10 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug 11 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug 12 15:36 DBEXPORT_ISEDB_ _15:30:02.exp.Z
-rw-r--r--  oracle dba  Aug 13 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug 14 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug 15 15:36 DBEXPORT_ISEDB_ _15:30:03.exp.Z
-rw-r--r--  oracle dba  Aug 16 15:36 DBEXPORT_ISEDB_ _15:30:01.exp.Z
-rw-r--r--  oracle dba  Aug 17 15:36 DBEXPORT_ISEDB_ _15:30:04.exp.Z
End of dbexport

20 Checking Email Output
Did the cron job run?
Is the db on-line?
Did the export complete without errors?
Did the older files get purged?
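
One way to make these checks quickly, assuming the message body has been saved to a file (dbexport_email.txt is a hypothetical name) and using the export directory shown on the previous slides:

# Did the export complete without errors?
grep -c 'Export terminated successfully' dbexport_email.txt

# Did the older files get purged?  List what remains in the export directory.
ls -l /db/archive/ISEDB/dbexport_exp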

21 dbdoc -- Checking Email
This is output from TAO DBA script dbdoc
*****************************************
date
Mon Aug 18 15:00:04 PDT 2003
Document shell script environment for dbdoc
echo ${ORACLE_OWNER}
oracle
echo ${ORACLE_HOME}
/db/oracle/app/oracle/product/8.1.7
echo ${DBDOCLOG}
/db/oracle/app/oracle/product/8.1.7/admin/dbdoc_log/DBDOC_ISEDB_ _15:00:04.out
echo ${ORACLE_SID}
ISEDB

22 dbdoc -- Checking Email
*****************************************
lsnrctl status
LSNRCTL for Solaris: Version Production on 18-AUG :00:05
(c) Copyright 1998 Oracle Corporation.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=akula)(PORT=1521)))
STATUS of the LISTENER
Alias                     LISTENER
Version                   TNSLSNR for Solaris: Version Production
Start Date                AUG :47:49
Uptime                    days 5 hr. 12 min. 16 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   /export/home/oracle/admin/etc/listener.ora
Listener Log File         /db/oracle/app/oracle/product/8.1.7/network/log/listener.log
Services Summary...
  ISEDB has 2 service handler(s)
The command completed successfully

23 dbdoc -- Checking Email
SVRMGR> select * from v$database;
DBID NAME CREATED RESETLOGS_ RESETLOGS PRIOR_RESE PRIOR_RES LOG_MODE CHECKPOINT ARCHIVE_CH
CONTROL CONTROLFI CONTROLFIL CONTROLFIL CONTROLFI OPEN_RESETL VERSION_T OPEN_MODE
ISEDB DEC DEC NOARCHIVELOG CURRENT 12-DEC AUG-03 NOT ALLOWED 12-DEC-01 READ WRITE
1 row selected.
SVRMGR> exit
*****************************************
Document recent contents of alert log
SVRMGR> select value from v$parameter where name='background_dump_dest';
VALUE
/export/home/oracle/admin/ISEDB/bdump
--> background_dump_dest path is
--> alert log file name is alert_ISEDB.log
--> tail -500 of alert file is
ARC0: media recovery disabled
Sun Aug 10 16:11:
Thread 1 advanced to log sequence 2477
Current log# 1 seq# 2477 mem# 0: /db/redo/ISEDB/log/redo1
Current log# 1 seq# 2477 mem# 1: /db/redo/ISEDB/loga/redo1a

24 dbexport -- Email output
*****************************************
start time of export
Mon Aug 18 15:30:03 PDT 2003
Connected to: Oracle8i Enterprise Edition Release Production
With the Partitioning option
JServer Release Production
Export done in UTF8 character set and UTF8 NCHAR character set
...
About to export the entire database ...
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting statistics
Export terminated successfully without warnings.
end time of export
Mon Aug 18 15:36:19 PDT 2003

25 dbsttspck_rm -- Check Email
This is output from TAO DBA script dbsttspck_rm
*****************************************
date
Sun Aug 17 16:00:05 PDT 2003
Values for this run of dbsttspck_rm
value of MIN_SNAPID
3081
value of MAX_SNAPID
6634
value of TRUNC_SNAPID
3755
value of RETENTION_TIME_DAYS
30
Document shell script environment for dbsttspck_rm

26 dbsttspck_rm -- Check Email
*****************************************
start time of export
Sun Aug 17 16:00:06 PDT 2003
Connected to: Oracle8i Enterprise Edition Release Production
With the Partitioning option
JServer Release Production
Export done in UTF8 character set and UTF8 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user PERFSTAT
. exporting object type definitions for user PERFSTAT
About to export PERFSTAT's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export PERFSTAT's tables via Conventional Path ...
. . exporting table STATS$BG_EVENT_SUMMARY            rows exported
. . exporting table STATS$BUFFER_POOL_STATISTICS      rows exported
. . exporting table STATS$DATABASE_INSTANCE           rows exported
...
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
end time of export
Sun Aug 17 16:05:30 PDT 2003

27 cron Scripts -- Details
dbdoc         Document database
dbexport      Full export
dbsttspck     Execute STATSPACK snapshot
dbsttspck_rm  Purge old snapshot data

28 dbdoc
Uses OS commands
  Document machine, environment
svrmgrl to access database
  sqlplus for 9i
Password not in script
  Use OS authentication
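
OS authentication keeps the password out of the script; a sketch of both variants, assuming the script runs as the UNIX user ‘oracle’ (a member of the dba group):

# Oracle 8i: svrmgrl, OS-authenticated (no password in the script)
svrmgrl << EOF
connect internal
select * from v\$database;
exit
EOF

# Oracle 9i: svrmgrl is gone, so use sqlplus with OS authentication instead
sqlplus /nolog << EOF
connect / as sysdba
select * from v\$database;
exit
EOF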

29 dbdoc
#!/bin/sh
#
# This is dbdoc -- script version of extended checklist
# to document an Oracle database and the host machine it
# is running on
# this version also removes dbdoc log files over XX days old
# added more SQL for partitioned tables, indexes
# added SQL to compute %used for tablespaces
# improved SQL for partitioned tables, indexes
# setup environment parameters for the script
ORACLE_SID=
export ORACLE_SID
ORACLE_OWNER=oracle
export ORACLE_OWNER
ORACLE_HOME=
export ORACLE_HOME
NOTIFY_LIST='<email address>'
export NOTIFY_LIST
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH
PATH=/bin:/usr/bin:/usr/ccs/bin:/usr/sbin:${ORACLE_HOME}/bin:.
export PATH
NLS_LANG=AMERICAN_AMERICA.US7ASCII    # or .WE8ISO8859P1, .UTF8 -- match your database
export NLS_LANG
LOGDIR=$ORACLE_HOME/admin/dbdoc_log
export LOGDIR
RETENTION_TIME_DAYS=<number of days>

30 dbdoc
export RETENTION_TIME_DAYS
DBDOCLOG=${LOGDIR}/DBDOC_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.out
export DBDOCLOG
SVRMGRL_TMP=${LOGDIR}/SVRMGRL_TMP_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.out
export SVRMGRL_TMP
CSHRC_FILE=~/.cshrc_<SID>
export CSHRC_FILE
INIT_FILE_PATH=$ORACLE_HOME/admin/pfile
export INIT_FILE_PATH
CRDB_FILE_PATH=$ORACLE_HOME/admin/sql
export CRDB_FILE_PATH
LISTENER_PATH=$ORACLE_HOME/admin/network
export LISTENER_PATH
TNSNAMES_PATH=$ORACLE_HOME/admin/network
export TNSNAMES_PATH
#
# DBDOCLOG2 is set up to catch the useless output of svrmgrl;
# we put this file in /dev/null so it really won't
# appear on disk at all. Without this file, all the text of
# every svrmgrl connect will appear on screen when this script
# is run from the command line.
DBDOCLOG2=/dev/null
export DBDOCLOG2
echo 'This is output from TAO DBA script dbdoc' > ${DBDOCLOG}

31 dbdoc
echo ' ' >> ${DBDOCLOG}
echo 'date' >> ${DBDOCLOG}
date >> ${DBDOCLOG}
#
# document shell script environment for dbdoc
echo 'Document shell script environment for dbdoc' >> ${DBDOCLOG}
echo 'echo ${ORACLE_OWNER}' >> ${DBDOCLOG}
echo ${ORACLE_OWNER} >> ${DBDOCLOG}
echo ${ORACLE_HOME} >> ${DBDOCLOG}
echo ${DBDOCLOG} >> ${DBDOCLOG}
echo ${ORACLE_SID} >> ${DBDOCLOG}
cat ${CSHRC_FILE} >> ${DBDOCLOG}
env >> ${DBDOCLOG}
# document the server machine

32 dbdoc
echo ' ' >> ${DBDOCLOG}
echo 'Document the server machine' >> ${DBDOCLOG}
hostname >> ${DBDOCLOG}
uptime >> ${DBDOCLOG}
/usr/bin/getconf CLK_TCK >> ${DBDOCLOG}
/usr/sbin/prtconf >> ${DBDOCLOG}
uname -a >> ${DBDOCLOG}
df -k >> ${DBDOCLOG}
ls -l / >> ${DBDOCLOG}
cat /etc/vfstab >> ${DBDOCLOG}
ls -l /var/opt >> ${DBDOCLOG}
ls -l /opt >> ${DBDOCLOG}
ls -Ll /dev/rdsk | grep oracle >> ${DBDOCLOG}
ls -Ll /dev/rdsk | grep sybase >> ${DBDOCLOG}
tail -40 /etc/system >> ${DBDOCLOG}
(cat the passwd file in the /etc directory) >> ${DBDOCLOG}
cat /etc/group >> ${DBDOCLOG}
#
# document the UNIX user oracle
echo 'Document the UNIX user oracle' >> ${DBDOCLOG}

33 dbdoc
echo ' ' >> ${DBDOCLOG}
echo 'id' >> ${DBDOCLOG}
id >> ${DBDOCLOG}
pwd >> ${DBDOCLOG}
echo ${CSHRC_FILE} >> ${DBDOCLOG}
cat ${CSHRC_FILE} >> ${DBDOCLOG}
#
# document the Oracle database instance
echo 'Document the Oracle database instance' >> ${DBDOCLOG}
echo 'ps -ef | grep oracle' >> ${DBDOCLOG}
ps -ef | grep oracle >> ${DBDOCLOG}
crontab -l oracle >> ${DBDOCLOG}
lsnrctl status >> ${DBDOCLOG}

34 dbdoc
echo ' ' >> ${DBDOCLOG}
svrmgrl << EOF >> ${DBDOCLOG2}
set termout off
set echo on
connect internal
spool '$SVRMGRL_TMP'
select * from v\$database;
exit
EOF
cat $SVRMGRL_TMP >> ${DBDOCLOG}
# further queries, run through the same svrmgrl/spool/cat pattern:
select * from v\$instance;
show sga
select * from v\$version;
select * from v\$option;
select * from v\$datafile;
select * from dba_data_files;
select * from dba_tablespaces;
select * from v\$logfile;
select * from v\$log;
archive log list;
select count(first_change#) "logswitches per hour",
       to_char(first_time, 'yyyy.mm.dd HH24') "on hour"
  from v\$loghist group by to_char(first_time, 'yyyy.mm.dd HH24');
select * from dba_rollback_segs;
select segment_name, tablespace_name, initial_extent, next_extent,
       min_extents, max_extents, pct_increase, status from dba_rollback_segs;
select usn, optsize, extents from v\$rollstat;
select * from v\$controlfile;
select tablespace_name, sum(bytes)/1024/1024 Mb from dba_data_files
 group by tablespace_name order by tablespace_name;
select tablespace_name, sum(bytes)/1024/1024 Mb from dba_free_space
 group by tablespace_name order by tablespace_name;

35 dbdoc
select tablespace_name, sum(bytes)/1024/1024 Mb, count(Blocks) Tb,
       min(Bytes)/1024/1024 MinMb, max(Bytes)/1024/1024 MaxMb
  from dba_free_space group by tablespace_name;
select sum(bytes)/1024/1024 Db_Total_Megabytes from dba_data_files;
select sum(bytes)/1024/1024 Db_Total_Free_Megabytes from dba_free_space;
select a.t1 "Tablespace", trunc(b.Mb) "Total Mb", trunc(a.Mb) "Used Mb",
       trunc(b.Mb-a.Mb) "Free Mb", trunc((a.Mb/b.Mb)*100) "Used%"
  from (select tablespace_name t1, sum(bytes)/1024/1024 Mb
          from sys.dba_extents group by tablespace_name) a,
       (select tablespace_name t2, sum(bytes)/1024/1024 Mb
          from sys.dba_data_files group by tablespace_name) b
 where a.t1 = b.t2;
select tablespace_name, initial_extent, next_extent, pct_increase,
       min_extents, max_extents from dba_tablespaces;
select * from v\$tempfile;
select * from dba_temp_files;
select count(*) from dba_part_tables;
select table_name, partitioning_type, partition_count from dba_part_tables;
select count(*) from dba_part_indexes;
select index_name, partitioning_type, partition_count from dba_part_indexes;
select name, object_type, count(*) from dba_part_key_columns group by name, object_type;
select * from dba_part_key_columns;
select table_name, index_name from dba_part_indexes
 where owner != 'SYS' order by table_name, index_name;
set long 100
select table_name, tablespace_name, partition_name, partition_position, high_value
  from dba_tab_partitions order by table_name, partition_position;
select index_name, tablespace_name, partition_name, partition_position, high_value
  from dba_ind_partitions order by index_name, partition_position;

36 dbdoc
select index_name, tablespace_name, status from dba_indexes
 where status like 'UNUSABLE%';
select index_name, partition_name, partition_position, tablespace_name, status
  from dba_ind_partitions where status like 'UNUSABLE%';
select tablespace_name, bytes/1024/1024 Mb, substr(file_name,1,40) from dba_temp_files;
select tablespace_name, total_blocks*8192/1024/1024 TOTALMb,
       free_blocks*8192/1024/1024 FREEMb, used_blocks*8192/1024/1024 USEDMb
  from v\$sort_segment;
select owner, count(*) from dba_objects group by owner;
select owner, count(*), object_type from dba_objects
 group by object_type, owner order by owner;
select owner, count(*) from dba_tables where tablespace_name='SYSTEM' group by owner;
select owner, table_name from dba_tables
 where owner != 'SYS' and tablespace_name='SYSTEM';
select count(*) from dba_objects where status='INVALID';
select owner, object_type, object_name, status from dba_objects
 where status='INVALID' order by owner, object_type, object_name;
select * from dba_users order by username;
select username, default_tablespace, temporary_tablespace from dba_users order by username;
select owner, sum(bytes)/1024/1024 Megabytes from dba_segments group by owner;
select * from dba_ts_quotas;
select * from dba_roles;
select * from dba_role_privs
 where grantee not in ('DBA','DBSNMP','EXP_FULL_DATABASE','IMP_FULL_DATABASE','SYS','SYSTEM');
select * from dba_sys_privs
 where grantee not in ('CONNECT','DBA','DBSNMP','EXP_FULL_DATABASE','IMP_FULL_DATABASE',
                       'RECOVERY_CATALOG_OWNER','RESOURCE','SNMPAGENT','SYS','SYSTEM');
select name, value from v\$parameter order by name;
select name, value from v\$parameter where isdefault = 'FALSE';
select * from nls_database_parameters;
select * from dba_db_links;
select * from dba_synonyms where table_owner != 'SYS' and table_owner != 'SYSTEM';
date >> ${DBDOCLOG}

37 dbdoc
alter database backup controlfile to trace;
echo ' ' >> ${DBDOCLOG}
echo '*****************************************' >> ${DBDOCLOG}
echo 'Document the sql backup control file' >> ${DBDOCLOG}
select value from v\$parameter where name='user_dump_dest';
UDUMP_PATH=`tail -3 $SVRMGRL_TMP | head -1 | awk '{ print $1 }'`
UDUMP_FILE=`ls -1rt ${UDUMP_PATH} | tail -1`
ls -l ${UDUMP_PATH}/${UDUMP_FILE} >> ${DBDOCLOG}
cat ${UDUMP_PATH}/${UDUMP_FILE} >> ${DBDOCLOG}
echo 'Document the ${INIT_FILE_PATH}/init${ORACLE_SID}.ora file' >> ${DBDOCLOG}
echo ${INIT_FILE_PATH} >> ${DBDOCLOG}
INIT_FILE=init${ORACLE_SID}.ora
echo ${INIT_FILE} >> ${DBDOCLOG}
cat ${INIT_FILE_PATH}/${INIT_FILE} >> ${DBDOCLOG}
echo 'Document the crdb${ORACLE_SID} file' >> ${DBDOCLOG}
echo ${CRDB_FILE_PATH} >> ${DBDOCLOG}
CRDB_FILE=crdb${ORACLE_SID}.sql
echo ${CRDB_FILE} >> ${DBDOCLOG}
echo '--> crdb file contents' >> ${DBDOCLOG}
cat ${CRDB_FILE_PATH}/${CRDB_FILE} >> ${DBDOCLOG}

38 dbdoc
echo ' ' >> ${DBDOCLOG}
echo 'Document the listener.ora file' >> ${DBDOCLOG}
echo ${LISTENER_PATH} >> ${DBDOCLOG}
echo '--> file name is listener.ora' >> ${DBDOCLOG}
cat ${LISTENER_PATH}/listener.ora >> ${DBDOCLOG}
echo 'Document the tnsnames.ora file' >> ${DBDOCLOG}
echo ${TNSNAMES_PATH} >> ${DBDOCLOG}
echo '--> file name is tnsnames.ora' >> ${DBDOCLOG}
echo '--> tnsnames file contents' >> ${DBDOCLOG}
cat ${TNSNAMES_PATH}/tnsnames.ora >> ${DBDOCLOG}
echo 'Document the /var/opt/oracle/oratab file' >> ${DBDOCLOG}
echo /var/opt/oracle >> ${DBDOCLOG}
echo '--> file name is oratab' >> ${DBDOCLOG}
cat /var/opt/oracle/oratab >> ${DBDOCLOG}
echo 'Document recent contents of alert log' >> ${DBDOCLOG}

39 dbdoc
select value from v\$parameter where name='background_dump_dest';
ALERT_PATH=`tail -3 $SVRMGRL_TMP | head -1 | awk '{ print $1 }'`
echo ${ALERT_PATH} >> ${DBDOCLOG}
ALERT_FILE=alert_${ORACLE_SID}.log
echo ${ALERT_FILE} >> ${DBDOCLOG}
tail -500 ${ALERT_PATH}/${ALERT_FILE} >> ${DBDOCLOG}
rm $SVRMGRL_TMP
echo ' ' >> ${DBDOCLOG}
echo '*****************************************' >> ${DBDOCLOG}
echo 'Existing dbdoc log files...' >> ${DBDOCLOG}
echo 'current sysdate' >> ${DBDOCLOG}
date >> ${DBDOCLOG}
echo 'cd $ORACLE_HOME/admin/dbdoc_log' >> ${DBDOCLOG}
cd ${LOGDIR}
pwd >> ${DBDOCLOG}
ls -l >> ${DBDOCLOG}
echo 'Remove dbdoc log files older than '${RETENTION_TIME_DAYS}' days' >> ${DBDOCLOG}
/usr/bin/find . -name "DBDOC_${ORACLE_SID}_*.out" -ctime +${RETENTION_TIME_DAYS} -exec /bin/rm {} \;

40 dbdoc
echo ' ' >> ${DBDOCLOG}
echo 'End of dbdoc' >> ${DBDOCLOG}
MSG='DBDOC for '`hostname`':'${ORACLE_SID}' '`date +\%m\%d\%Y_\%T`
/usr/bin/mailx -s "${MSG}" ${NOTIFY_LIST} < ${DBDOCLOG}
#
# end of dbdoc

41 dbexport
Full export of a single database
Compress export file
  If needed, export directly to compressed file through pipe (see the sketch below)
Purge older export files
Why not cold/hot backups?
  Development machines don’t have disk space for a single copy of each database
  Exports much smaller than full database
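
The pipe variant (left commented out in the script that follows) compresses on the fly so the uncompressed export file never touches disk; a cleaned-up sketch using the same variables the script defines:

# Export straight into compress through a named pipe.
PIPE=/tmp/${EXPORT_PIPE}
mknod ${PIPE} p
compress < ${PIPE} > ${DBEXPORT_COMPRESSED} &
${ORACLE_HOME}/bin/exp / full=y consistent=y direct=y file=${PIPE} log=${EXPORT_LOG_TMP}
rm ${PIPE}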

42 dbexport
#!/bin/sh
#
# This is dbexport -- script to create a full export of
# a database; the script exports normally, and then compresses the
# export file
# script sets all needed environment parameters
# script removes export files older than XX days
ORACLE_SID=
export ORACLE_SID
ORACLE_OWNER=oracle
export ORACLE_OWNER
ORACLE_HOME=
export ORACLE_HOME
NOTIFY_LIST='<email address>'
export NOTIFY_LIST
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH
PATH=/bin:/usr/bin:/usr/ccs/bin:/usr/sbin:${ORACLE_HOME}/bin:.
export PATH
NLS_LANG=AMERICAN_AMERICA.US7ASCII    # or .WE8ISO8859P1, .UTF8 -- match your database
export NLS_LANG
CSHRC_FILE=~/.cshrc_<SID>
export CSHRC_FILE
LOGDIR=$ORACLE_HOME/admin/dbexport_log
export LOGDIR
RETENTION_TIME_DAYS=<number of days>
export RETENTION_TIME_DAYS

43 dbexport
#
# EXPORT_LOG_TMP isn't needed for this script alone, but, since the
# export log file is written to disk while the export is made, if
# multiple export jobs are running, and they all use the same disk filename,
# one script may remove the export log file for the other script. We use
# the filename EXPORT_LOG_TMP so that on disk each script will have its own
# export log file to prevent any other scripts removing it before the export
# logfile is cat-ed to the overall export script log
EXPORT_LOG_TMP=${LOGDIR}/EXPORT_LOG_TMP_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.tmp
export EXPORT_LOG_TMP
EXPORT_PIPE=EXPORT_PIPE_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`
export EXPORT_PIPE
EXPORTDIR=$ORACLE_HOME/admin/dbexport_exp
export EXPORTDIR
DBEXPORT=${EXPORTDIR}/DBEXPORT_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.exp
export DBEXPORT
DBEXPORT_COMPRESSED=${DBEXPORT}.Z
export DBEXPORT_COMPRESSED
DBEXPORTLOG=${LOGDIR}/DBEXPORT_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.out
export DBEXPORTLOG
echo 'This is output from TAO DBA script dbexport' > ${DBEXPORTLOG}
echo ' ' >> ${DBEXPORTLOG}
echo '*****************************************' >> ${DBEXPORTLOG}
echo 'date' >> ${DBEXPORTLOG}
date >> ${DBEXPORTLOG}

44 dbexport
#
# document shell script environment for dbexport
echo ' ' >> ${DBEXPORTLOG}
echo '*****************************************' >> ${DBEXPORTLOG}
echo 'Document shell script environment for dbexport' >> ${DBEXPORTLOG}
echo 'echo ${ORACLE_OWNER}' >> ${DBEXPORTLOG}
echo ${ORACLE_OWNER} >> ${DBEXPORTLOG}
echo ${ORACLE_HOME} >> ${DBEXPORTLOG}
echo ${DBEXPORTLOG} >> ${DBEXPORTLOG}
echo ${ORACLE_SID} >> ${DBEXPORTLOG}
cat ${CSHRC_FILE} >> ${DBEXPORTLOG}
env >> ${DBEXPORTLOG}
# document the server machine
echo 'Document the server machine' >> ${DBEXPORTLOG}

45 dbexport
echo ' ' >> ${DBEXPORTLOG}
echo 'hostname' >> ${DBEXPORTLOG}
hostname >> ${DBEXPORTLOG}
df -k >> ${DBEXPORTLOG}
#
# change directory to where the export file will be created
cd ${EXPORTDIR}
pwd
# default is export followed by compress
# if needed, export directly to pipe to use less disk space --
# uncomment the # lines below
echo 'make full export...' >> ${DBEXPORTLOG}
# echo 'make full export to pipe...' >> ${DBEXPORTLOG}
#echo 'mknod /tmp/${EXPORT_PIPE} p' >> ${DBEXPORTLOG}
#echo 'compress < /tmp/${EXPORT_PIPE} > '${DBEXPORT_COMPRESSED}' &' >> ${DBEXPORTLOG}
#echo '${ORACLE_HOME}/bin/exp / full=y consistent=y direct=y BUFFER= file=/tmp/${EXPORT_PIPE} log=${EXPORT_LOG_TMP}' >> ${DBEXPORTLOG}
echo '${ORACLE_HOME}/bin/exp / full=y consistent=y direct=y BUFFER= file='${DBEXPORT}' log=${EXPORT_LOG_TMP}' >> ${DBEXPORTLOG}

46 dbexport
echo 'cat ${EXPORT_LOG_TMP}' >> ${DBEXPORTLOG}
echo 'rm ${EXPORT_LOG_TMP}' >> ${DBEXPORTLOG}
#echo 'rm /tmp/${EXPORT_PIPE}' >> ${DBEXPORTLOG}
echo ' ' >> ${DBEXPORTLOG}
echo '*****************************************' >> ${DBEXPORTLOG}
echo 'start time of export' >> ${DBEXPORTLOG}
date >> ${DBEXPORTLOG}
#mknod /tmp/${EXPORT_PIPE} p
#compress < /tmp/${EXPORT_PIPE} > ${DBEXPORT_COMPRESSED} &
#${ORACLE_HOME}/bin/exp / full=y consistent=y direct=y BUFFER= file=/tmp/${EXPORT_PIPE} log=${EXPORT_LOG_TMP}
${ORACLE_HOME}/bin/exp / full=y consistent=y direct=y BUFFER= file=${DBEXPORT} log=${EXPORT_LOG_TMP}
cat ${EXPORT_LOG_TMP} >> ${DBEXPORTLOG}
rm ${EXPORT_LOG_TMP} >> ${DBEXPORTLOG}
#rm /tmp/${EXPORT_PIPE} >> ${DBEXPORTLOG}
pwd >> ${DBEXPORTLOG}
df -k . >> ${DBEXPORTLOG}
echo ${DBEXPORT} >> ${DBEXPORTLOG}
ls -l >> ${DBEXPORTLOG}
df -k >> ${DBEXPORTLOG}
#
# if exporting to pipe, no sense in compressing (again), skip
# all commands down to next 'if exporting to pipe' comment

47 dbexport
echo 'compress '${DBEXPORT} >> ${DBEXPORTLOG}
compress ${DBEXPORT}
date >> ${DBEXPORTLOG}
pwd >> ${DBEXPORTLOG}
df -k . >> ${DBEXPORTLOG}
echo ${DBEXPORT_COMPRESSED} >> ${DBEXPORTLOG}
ls -l >> ${DBEXPORTLOG}
df -k >> ${DBEXPORTLOG}
#
# if exporting to pipe -- end of commands to be commented out
echo ' ' >> ${DBEXPORTLOG}
echo '*****************************************' >> ${DBEXPORTLOG}
echo 'Existing dbexport log files...' >> ${DBEXPORTLOG}
echo 'current sysdate' >> ${DBEXPORTLOG}
echo 'cd ${LOGDIR} ' >> ${DBEXPORTLOG}
cd ${LOGDIR}
echo 'Remove dbexport log files older than '${RETENTION_TIME_DAYS}' days' >> ${DBEXPORTLOG}

48 dbexport
/usr/bin/find . -name "DBEXPORT_${ORACLE_SID}_*.out" -ctime +${RETENTION_TIME_DAYS} -exec /bin/rm {} \;
echo ' ' >> ${DBEXPORTLOG}
pwd >> ${DBEXPORTLOG}
ls -l >> ${DBEXPORTLOG}
echo '*****************************************' >> ${DBEXPORTLOG}
echo 'Existing dbexport export files...' >> ${DBEXPORTLOG}
echo 'current sysdate' >> ${DBEXPORTLOG}
date >> ${DBEXPORTLOG}
echo 'cd ${EXPORTDIR}' >> ${DBEXPORTLOG}
cd ${EXPORTDIR}
echo 'Remove dbexport export files older than '${RETENTION_TIME_DAYS}' days' >> ${DBEXPORTLOG}
/usr/bin/find . -name "DBEXPORT_${ORACLE_SID}_*.exp*" -ctime +${RETENTION_TIME_DAYS} -exec /bin/rm {} \;

49 dbexport
echo ' ' >> ${DBEXPORTLOG}
echo 'End of dbexport' >> ${DBEXPORTLOG}
MSG='DBEXPORT for '`hostname`':'${ORACLE_SID}' '`date +\%m\%d\%Y_\%T`
/usr/bin/mailx -s "${MSG}" ${NOTIFY_LIST} < ${DBEXPORTLOG}
#
# end of dbexport

50 dbsttspck
Execute statspack.snap
  Gathers db performance data
Doesn’t process any of the data
Doesn’t generate reports
Doesn’t purge the older data

51 dbsttspck
#!/bin/sh
#
# This is dbsttspck -- script to execute STATSPACK snapshot
# setup environment parameters for the script
ORACLE_SID=
export ORACLE_SID
ORACLE_OWNER=oracle
export ORACLE_OWNER
ORACLE_HOME=
export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH
PATH=/bin:/usr/bin:/usr/ccs/bin:/usr/sbin:${ORACLE_HOME}/bin:.
export PATH
$ORACLE_HOME/bin/sqlplus perfstat/perfstat << EOF
execute statspack.snap
EOF
# end of dbsttspck

52 dbsttspck_rm
Exports existing STATSPACK data
  Exports PERFSTAT schema
Documents existing snapshots
Determines which snapshots are older than retention interval
Purges old data
  Executes sppurge.sql
  Doesn’t purge SQL text

53 dbsttspck_rm
#!/bin/sh
#
# This is dbsttspck_rm -- script to remove STATSPACK
# table data older than a specified number of days
# each time this script runs, it exports the PERFSTAT
# user schema, then determines the snap_id values for
# the range of snapshot data to be removed, writes
# a file that contains the commands needed to execute
# sppurge and then executes that file
# script removes export files older than XX days
ORACLE_SID=
export ORACLE_SID
ORACLE_OWNER=oracle
export ORACLE_OWNER
ORACLE_HOME=
export ORACLE_HOME
NOTIFY_LIST='<email address>'
export NOTIFY_LIST
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH
PATH=/bin:/usr/bin:/usr/ccs/bin:/usr/sbin:${ORACLE_HOME}/bin:.
export PATH
NLS_LANG=AMERICAN_AMERICA.US7ASCII    # or .WE8ISO8859P1, .UTF8 -- match your database
export NLS_LANG
CSHRC_FILE=~/.cshrc_<SID>
export CSHRC_FILE
LOGDIR=$ORACLE_HOME/admin/dbsttspck_rm_log
export LOGDIR
RETENTION_TIME_DAYS=<number of days>
export RETENTION_TIME_DAYS

54 dbsttspck_rm
#
# EXPORT_LOG_TMP isn't needed for this script alone, but, since the
# export log file is written to disk while the export is made, if
# multiple export jobs are running, and they all use the same disk filename,
# one script may remove the export log file for the other script. We use
# the filename EXPORT_LOG_TMP so that on disk each script will have its own
# export log file to prevent any other scripts removing it before the export
# logfile is cat-ed to the overall export script log
EXPORT_LOG_TMP=${LOGDIR}/EXPORT_LOG_TMP_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.tmp
export EXPORT_LOG_TMP
EXPORT_PIPE=EXPORT_PIPE_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`
export EXPORT_PIPE
EXPORTDIR=$ORACLE_HOME/admin/dbsttspck_rm_exp
export EXPORTDIR
SQLPLUS_TMP=${LOGDIR}/SQLPLUS_TMP_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.tmp
export SQLPLUS_TMP
# setup .sql file that will compute min, max, trunc_snap_id
echo 'set pagesize 0' > ${SQLPLUS_TMP}
echo 'set feedback off' >> ${SQLPLUS_TMP}
echo 'set heading off' >> ${SQLPLUS_TMP}
echo 'select trim(min(snap_id)) from stats$snapshot;' >> ${SQLPLUS_TMP}
echo 'exit' >> ${SQLPLUS_TMP}
MIN_SNAPID=`sqlplus -s

55 dbsttspck_rm
echo 'set pagesize 0' > ${SQLPLUS_TMP}
echo 'set feedback off' >> ${SQLPLUS_TMP}
echo 'set heading off' >> ${SQLPLUS_TMP}
echo 'select trim(max(snap_id)) from stats$snapshot;' >> ${SQLPLUS_TMP}
echo 'exit' >> ${SQLPLUS_TMP}
MAX_SNAPID=`sqlplus -s
echo 'select trim(max(snap_id)) from stats$snapshot where snap_time < sysdate-'${RETENTION_TIME_DAYS}';' >> ${SQLPLUS_TMP}
TRUNC_SNAPID=`sqlplus -s
rm ${SQLPLUS_TMP}
DBSTTSPCK_RM=${EXPORTDIR}/DBSTTSPCK_RM_${ORACLE_SID}_${MIN_SNAPID}_${MAX_SNAPID}_`date +\%m\%d\%Y_\%T`.exp
export DBSTTSPCK_RM
DBSTTSPCK_RM_COMPRESSED=${DBSTTSPCK_RM}.Z
export DBSTTSPCK_RM_COMPRESSED
DBSTTSPCK_RM_LOG=${LOGDIR}/DBSTTSPCK_RM_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.out
export DBSTTSPCK_RM_LOG
echo 'This is output from TAO DBA script dbsttspck_rm' > ${DBSTTSPCK_RM_LOG}
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo '*****************************************' >> ${DBSTTSPCK_RM_LOG}
echo 'date' >> ${DBSTTSPCK_RM_LOG}
date >> ${DBSTTSPCK_RM_LOG}

56 dbsttspck_rm
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo 'Values for this run of dbsttspck_rm' >> ${DBSTTSPCK_RM_LOG}
echo 'value of MIN_SNAPID' >> ${DBSTTSPCK_RM_LOG}
echo ${MIN_SNAPID} >> ${DBSTTSPCK_RM_LOG}
echo 'value of MAX_SNAPID' >> ${DBSTTSPCK_RM_LOG}
echo ${MAX_SNAPID} >> ${DBSTTSPCK_RM_LOG}
echo 'value of TRUNC_SNAPID' >> ${DBSTTSPCK_RM_LOG}
echo ${TRUNC_SNAPID} >> ${DBSTTSPCK_RM_LOG}
echo 'value of RETENTION_TIME_DAYS' >> ${DBSTTSPCK_RM_LOG}
echo ${RETENTION_TIME_DAYS} >> ${DBSTTSPCK_RM_LOG}
#
# document shell script environment for dbsttspck_rm
echo 'Document shell script environment for dbsttspck_rm' >> ${DBSTTSPCK_RM_LOG}
echo 'echo ${ORACLE_OWNER}' >> ${DBSTTSPCK_RM_LOG}
echo ${ORACLE_OWNER} >> ${DBSTTSPCK_RM_LOG}

57 dbsttspck_rm
echo ${ORACLE_HOME} >> ${DBSTTSPCK_RM_LOG}
echo ${DBSTTSPCK_RM_LOG} >> ${DBSTTSPCK_RM_LOG}
echo ${ORACLE_SID} >> ${DBSTTSPCK_RM_LOG}
cat ${CSHRC_FILE} >> ${DBSTTSPCK_RM_LOG}
env >> ${DBSTTSPCK_RM_LOG}
#
# document the server machine
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo '*****************************************' >> ${DBSTTSPCK_RM_LOG}
echo 'Document the server machine' >> ${DBSTTSPCK_RM_LOG}
echo 'hostname' >> ${DBSTTSPCK_RM_LOG}
hostname >> ${DBSTTSPCK_RM_LOG}
df -k >> ${DBSTTSPCK_RM_LOG}
# change directory to where the export file will be created
cd ${EXPORTDIR}
# default is export followed by compress
# if needed, export directly to pipe to use less disk space --
# uncomment the # lines below

58 dbsttspck_rm
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo 'export PERFSTAT schema...' >> ${DBSTTSPCK_RM_LOG}
# echo 'export PERFSTAT schema to pipe...' >> ${DBSTTSPCK_RM_LOG}
#echo 'mknod /tmp/${EXPORT_PIPE} p' >> ${DBSTTSPCK_RM_LOG}
#echo 'compress < /tmp/${EXPORT_PIPE} > '${DBSTTSPCK_RM_COMPRESSED}' &' >> ${DBSTTSPCK_RM_LOG}
#echo '${ORACLE_HOME}/bin/exp perfstat/perfstat owner=PERFSTAT file=/tmp/${EXPORT_PIPE} log=${EXPORT_LOG_TMP}' >> ${DBSTTSPCK_RM_LOG}
echo '${ORACLE_HOME}/bin/exp perfstat/perfstat owner=PERFSTAT file='${DBSTTSPCK_RM}' log=${EXPORT_LOG_TMP}' >> ${DBSTTSPCK_RM_LOG}
echo 'cat ${EXPORT_LOG_TMP}' >> ${DBSTTSPCK_RM_LOG}
echo 'rm ${EXPORT_LOG_TMP}' >> ${DBSTTSPCK_RM_LOG}
#echo 'rm /tmp/${EXPORT_PIPE}' >> ${DBSTTSPCK_RM_LOG}
echo 'start time of export' >> ${DBSTTSPCK_RM_LOG}
date >> ${DBSTTSPCK_RM_LOG}
#mknod /tmp/${EXPORT_PIPE} p
#compress < /tmp/${EXPORT_PIPE} > ${DBSTTSPCK_RM_COMPRESSED} &
#${ORACLE_HOME}/bin/exp perfstat/perfstat owner=PERFSTAT file=/tmp/${EXPORT_PIPE} log=${EXPORT_LOG_TMP}
${ORACLE_HOME}/bin/exp perfstat/perfstat owner=PERFSTAT file=${DBSTTSPCK_RM} log=${EXPORT_LOG_TMP}
cat ${EXPORT_LOG_TMP} >> ${DBSTTSPCK_RM_LOG}
rm ${EXPORT_LOG_TMP} >> ${DBSTTSPCK_RM_LOG}
#rm /tmp/${EXPORT_PIPE} >> ${DBSTTSPCK_RM_LOG}

59 dbsttspck_rm
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo 'end time of export' >> ${DBSTTSPCK_RM_LOG}
date >> ${DBSTTSPCK_RM_LOG}
pwd >> ${DBSTTSPCK_RM_LOG}
df -k . >> ${DBSTTSPCK_RM_LOG}
echo ${DBSTTSPCK_RM} >> ${DBSTTSPCK_RM_LOG}
ls -l >> ${DBSTTSPCK_RM_LOG}
df -k >> ${DBSTTSPCK_RM_LOG}
#
# if exporting to pipe, no sense in compressing (again), skip
# all commands down to next 'if exporting to pipe' comment
# this means you need to comment out all the lines down to the
# 'if exporting to pipe' comment
echo 'compress export file...' >> ${DBSTTSPCK_RM_LOG}
echo 'compress '${DBSTTSPCK_RM} >> ${DBSTTSPCK_RM_LOG}
compress ${DBSTTSPCK_RM}
echo 'end time of compression' >> ${DBSTTSPCK_RM_LOG}

60 dbsttspck_rm
pwd >> ${DBSTTSPCK_RM_LOG}
df -k . >> ${DBSTTSPCK_RM_LOG}
echo ${DBSTTSPCK_RM_COMPRESSED} >> ${DBSTTSPCK_RM_LOG}
ls -l >> ${DBSTTSPCK_RM_LOG}
df -k >> ${DBSTTSPCK_RM_LOG}
#
# if exporting to pipe -- end of commands to be commented out
# output info on existing snapshots
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo '*****************************************' >> ${DBSTTSPCK_RM_LOG}
echo 'Existing snapshots' >> ${DBSTTSPCK_RM_LOG}
${ORACLE_HOME}/bin/sqlplus perfstat/perfstat << EOF
set termout off
set echo on
spool ${SQLPLUS_TMP}
select to_char(sysdate,' dd Mon YYYY HH24:mi:ss') from dual;
select min(snap_id) from stats\$snapshot;
select max(snap_id) from stats\$snapshot;
select max(snap_id) from stats\$snapshot where snap_time < sysdate-${RETENTION_TIME_DAYS};
select name, snap_id, to_char(snap_time,' dd Mon YYYY HH24:mi:ss')
  from stats\$snapshot, v\$database order by snap_id;
exit
EOF

61 dbsttspck_rm
echo ' ' >> ${DBSTTSPCK_RM_LOG}
cat ${SQLPLUS_TMP} >> ${DBSTTSPCK_RM_LOG}
rm ${SQLPLUS_TMP}
#
# test to see if truncation is needed
# if ${TRUNC_SNAPID} is not the null string, return true (0)
# if ${TRUNC_SNAPID} is null, that means there aren't any snapshots that
# are older than ${RETENTION_TIME_DAYS} and we don't want to execute sppurge
if test ${TRUNC_SNAPID}
then
echo '*****************' >> ${DBSTTSPCK_RM_LOG}
echo 'Truncation Needed' >> ${DBSTTSPCK_RM_LOG}
# truncate existing snapshot data to remove snapshots older than ${RETENTION_TIME_DAYS} days
echo '${ORACLE_HOME}/bin/sqlplus perfstat/perfstat << EOF' > ${SQLPLUS_TMP}
echo >> ${SQLPLUS_TMP}
echo ${MIN_SNAPID} >> ${SQLPLUS_TMP}
echo ${TRUNC_SNAPID} >> ${SQLPLUS_TMP}
echo 'commit;' >> ${SQLPLUS_TMP}
echo 'EOF' >> ${SQLPLUS_TMP}
chmod 744 ${SQLPLUS_TMP}

62 dbsttspck_rm
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo 'Purge existing STATSPACK data older than '${RETENTION_TIME_DAYS}' days' >> ${DBSTTSPCK_RM_LOG}
echo 'SQL script to be executed to execute sppurge' >> ${DBSTTSPCK_RM_LOG}
cat ${SQLPLUS_TMP} >> ${DBSTTSPCK_RM_LOG}
echo '*********************' >> ${DBSTTSPCK_RM_LOG}
echo 'start time of sppurge' >> ${DBSTTSPCK_RM_LOG}
date >> ${DBSTTSPCK_RM_LOG}
${SQLPLUS_TMP} >> ${DBSTTSPCK_RM_LOG}
echo '*******************' >> ${DBSTTSPCK_RM_LOG}
echo 'end time of sppurge' >> ${DBSTTSPCK_RM_LOG}
rm ${SQLPLUS_TMP} >> ${DBSTTSPCK_RM_LOG}
#
# not truncating snapshot data...
else

63 dbsttspck_rm
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo 'No Truncation Needed -- TRUNC_SNAPID does not exist' >> ${DBSTTSPCK_RM_LOG}
fi
#
# end of if then block for truncating snapshot data
# output info on remaining snapshots
echo '*****************************************' >> ${DBSTTSPCK_RM_LOG}
echo 'Remaining snapshots' >> ${DBSTTSPCK_RM_LOG}
${ORACLE_HOME}/bin/sqlplus perfstat/perfstat << EOF
set termout off
set echo on
spool ${SQLPLUS_TMP}
select to_char(sysdate,' dd Mon YYYY HH24:mi:ss') from dual;
select min(snap_id) from stats\$snapshot;
select max(snap_id) from stats\$snapshot;
exit
EOF
cat ${SQLPLUS_TMP} >> ${DBSTTSPCK_RM_LOG}

64 dbsttspck_rm
rm ${SQLPLUS_TMP} >> ${DBSTTSPCK_RM_LOG}
#
# remove old log and export files
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo '*****************************************' >> ${DBSTTSPCK_RM_LOG}
echo 'Existing dbsttspck_rm log files...' >> ${DBSTTSPCK_RM_LOG}
echo 'current sysdate' >> ${DBSTTSPCK_RM_LOG}
date >> ${DBSTTSPCK_RM_LOG}
echo 'cd ${LOGDIR} ' >> ${DBSTTSPCK_RM_LOG}
cd ${LOGDIR}
pwd >> ${DBSTTSPCK_RM_LOG}
ls -l >> ${DBSTTSPCK_RM_LOG}
echo 'Remove dbsttspck_rm log files older than '${RETENTION_TIME_DAYS}' days' >> ${DBSTTSPCK_RM_LOG}
/usr/bin/find . -name "DBSTTSPCK_RM_${ORACLE_SID}_*.out" -ctime +${RETENTION_TIME_DAYS} -exec /bin/rm {} \;

65 dbsttspck_rm
echo ' ' >> ${DBSTTSPCK_RM_LOG}
echo 'Existing dbsttspck_rm export files...' >> ${DBSTTSPCK_RM_LOG}
echo 'current sysdate' >> ${DBSTTSPCK_RM_LOG}
date >> ${DBSTTSPCK_RM_LOG}
echo 'cd ${EXPORTDIR}' >> ${DBSTTSPCK_RM_LOG}
cd ${EXPORTDIR}
pwd >> ${DBSTTSPCK_RM_LOG}
ls -l >> ${DBSTTSPCK_RM_LOG}
echo 'Remove dbsttspck_rm export files older than '${RETENTION_TIME_DAYS}' days' >> ${DBSTTSPCK_RM_LOG}
/usr/bin/find . -name "DBSTTSPCK_RM_${ORACLE_SID}_*.exp*" -ctime +${RETENTION_TIME_DAYS} -exec /bin/rm {} \;
echo 'End of dbsttspck_rm' >> ${DBSTTSPCK_RM_LOG}
MSG='DBSTTSPCK_RM for '`hostname`':'${ORACLE_SID}' '`date +\%m\%d\%Y_\%T`
/usr/bin/mailx -s "${MSG}" ${NOTIFY_LIST} < ${DBSTTSPCK_RM_LOG}
#
# end of dbsttspck_rm

66 sppurge.sql
sppurge doesn’t delete SQL text
SQL to do this
  Can be resource intensive
  Is in sppurge.sql, commented out
Execute this SQL manually or un-comment it in sppurge.sql
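
The delete that removes orphaned SQL text is shipped commented out inside sppurge.sql because it can be expensive; it is roughly of the following form (paraphrased from memory, so check the exact statement in your copy of sppurge.sql before running it):

sqlplus perfstat/perfstat << EOF
-- Remove SQL text no longer referenced by any remaining snapshot
-- (approximate form of the commented-out block in sppurge.sql).
delete from stats\$sqltext st
 where not exists (select 1 from stats\$sql_summary ss
                    where ss.hash_value  = st.hash_value
                      and ss.text_subset = st.text_subset);
commit;
exit
EOF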

67 Installation Process
mkdir -p $ORACLE_HOME/admin/bin
mkdir -p $ORACLE_HOME/admin/dbdoc_log
mkdir -p $ORACLE_HOME/admin/dbexport_exp
mkdir -p $ORACLE_HOME/admin/dbexport_log
mkdir -p $ORACLE_HOME/admin/dbsttspck_rm_exp
mkdir -p $ORACLE_HOME/admin/dbsttspck_rm_log
cd $ORACLE_HOME/admin/bin
ftp daylight.sfbay
cd A_My_Documents/DBA/bin
bin
get dbdoc
get dbexport
get dbsttspck
get dbsttspck_rm
chmod 744 dbdoc
chmod 744 dbexport
chmod 744 dbsttspck
chmod 744 dbsttspck_rm
mv dbdoc dbdoc_<SID>
mv dbexport dbexport_<SID>
mv dbsttspck dbsttspck_<SID>
mv dbsttspck_rm dbsttspck_rm_<SID>

68 Installation Process
dbdoc parameters to setup...
ORACLE_SID=
ORACLE_HOME=
NOTIFY_LIST='<email address>'
PATH=/bin:/usr/bin:/usr/ccs/bin:/usr/sbin:${ORACLE_HOME}/bin:.
NLS_LANG=AMERICAN_AMERICA.US7ASCII    # or .WE8ISO8859P1, .UTF8 -- match your database
LOGDIR=$ORACLE_HOME/admin/dbdoc_log
RETENTION_TIME_DAYS=<number of days>
DBDOCLOG=${LOGDIR}/DBDOC_${ORACLE_SID}_`date +\%m\%d\%Y_\%T`.out
CSHRC_FILE=~/.cshrc
INIT_FILE_PATH=$ORACLE_HOME/admin/pfile
CRDB_FILE_PATH=$ORACLE_HOME/admin/sql
LISTENER_PATH=$ORACLE_HOME/network/admin
TNSNAMES_PATH=$ORACLE_HOME/admin/network

additional dbexport parameters to setup...
EXPORTDIR=$ORACLE_HOME/admin/dbexport_exp
LOGDIR=$ORACLE_HOME/admin/dbexport_log

69 Installation Process
--> note: dbexport uses '/' as the user/password for export --
    if the db is not configured with os_authent_prefix='OPS$' then you need
    to change the SQL in the crdb script to create the OPS$ORACLE user.
    The os_authent_prefix and the characters before the username must match
    (OPS$ and OPS$ORACLE for example)
--> to create the OPS$ORACLE user
create user ops$oracle identified externally;
grant dba to ops$oracle;
alter user ops$oracle temporary tablespace temp;
connect /;
@?/rdbms/admin/catdbsyn.sql;
--> need to install STATSPACK for the dbsttspck, dbsttspck_rm cron jobs...
svrmgrl
connect internal
create tablespace perfstat datafile '/xxx/xxx/perfstat_01.dbf' size 750M
  extent management local uniform size 40K;
cd $ORACLE_HOME/rdbms/admin
sqlplus sys @spcreate.sql

70 crontab Entries
0 15 * * * /db/oracle/app/oracle/product/8.1.7/admin/bin/dbdoc_ISEDB > /dev/null 2>&1
30 15 * * * /db/oracle/app/oracle/product/8.1.7/admin/bin/dbexport_ISEDB > /dev/null 2>&1
0 16 * * 0 /db/oracle/app/oracle/product/8.1.7/admin/bin/dbsttspck_rm_ISEDB > /dev/null 2>&1
0,15,30,45 * * * * /db/oracle/app/oracle/product/8.1.7/admin/bin/dbsttspck_ISEDB > /dev/null 2>&1
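
The deck does not show how the entries get installed; one way, run as the UNIX user oracle (new_entries.cron is a hypothetical file holding the lines above):

crontab -l > /tmp/oracle.cron            # save the current entries
cat new_entries.cron >> /tmp/oracle.cron
crontab /tmp/oracle.cron                 # install the combined file
crontab -l                               # verify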

71 Db Vendor Independent
Same process used for
  Sybase databases
  MySQL databases
Modify cron scripts to use vendor-specific SQL commands
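
Only the block that talks to the database changes from vendor to vendor; for example, a MySQL version of the dbexport step might swap exp for mysqldump (a sketch with hypothetical variable names, credentials assumed to come from ~/.my.cnf):

# MySQL flavor of the export step
DBEXPORT=${EXPORTDIR}/DBEXPORT_${DB_NAME}_`date +\%m\%d\%Y_\%T`.sql
mysqldump --all-databases > ${DBEXPORT} 2>> ${DBEXPORTLOG}
compress ${DBEXPORT}
ls -l ${DBEXPORT}.Z >> ${DBEXPORTLOG}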

