Session Overview » The Cost-Based Optimizer » Initialization Parameters » The Importance of DB-Statistics » Troubleshooting Bad Performance » The Performance Report » Further Tuning Tips
About Forward-Looking Statements » We may make statements regarding our product development and service offering initiatives, including the content of future product upgrades, updates or functionality in development. While such statements represent our current intentions, they may be modified, delayed or abandoned without prior notice and there is no assurance that such offering, upgrades, updates or functionality will become available unless and until they have been made generally available to our customers.
What does the CBO do? » Parses the query and generates many different execution plans for it » Estimates the cost of each plan and chooses the cheapest one found so far » Bases its calculations on what it knows about your system: statistics about your database, and init parameters
When the CBO doesn't work… » The CBO can only make the right decisions if it has enough information about your system and your data » Missing or outdated statistics are deadly » So are bad init parameters » Shipping defaults are miserable!
Initialization Parameter Files » stored in $ORACLE_HOME/dbs/ » Oracle8i: init$ORACLE_SID.ora (pfile=parameter file) editable and human-readable text » Oracle9i/10g: spfile$ORACLE_SID.ora (spfile=server parameter file) binary, non-editable » advantage of spfile: can dynamically alter system and save changes to spfile to preserve them through a DB restart
Changing an init parameter (9i) » --to test a parameter: » ALTER SESSION SET param=value; » --to make semi-persistent change on running system: » ALTER SYSTEM SET param=value SCOPE=MEMORY; » --to make a change in spfile only, i.e. only becomes effective after DB restart (test first!): » ALTER SYSTEM SET param=value SCOPE=SPFILE; » --to do both, after the value has been tested (!): » ALTER SYSTEM SET param=value SCOPE=BOTH;
pfile vs. spfile » Keep an editable version (pfile) and a backup copy (a named spfile) to protect against mistakes that make the system nonfunctional » oracle# sqlplus / AS SYSDBA CREATE PFILE='initSID.ora' FROM SPFILE; » Similarly: CREATE SPFILE='…' FROM PFILE='…' » To use a named backup spfile when the main spfile is corrupt, start the DB with a pfile containing only one parameter: spfile=… » Must first remove the bad default spfile though
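The backup-and-recovery dance above can be sketched as a short SQL*Plus session. All file paths and the SID (`ORCL`) are illustrative assumptions, not values from the original deck:

```sql
-- Run as SYSDBA. Keep an editable, human-readable backup of the running
-- configuration (adjust SID and paths to your installation):
CREATE PFILE='/u01/app/oracle/backup/initORCL.ora' FROM SPFILE;

-- If the default spfile ever gets corrupted: remove or rename it, then
-- start the instance from a one-line pfile that just points at a good
-- spfile copy. Contents of that one-line pfile:
--   spfile=/u01/app/oracle/backup/spfileORCL.ora
STARTUP PFILE='/u01/app/oracle/backup/initORCL.ora';
```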
Memory Management » Sizing the SGA & PGA » Let Oracle do it (9i/10g): sga_max_size=(what it mustn't get above) 10g: sga_target=(what you'd like it to be) pga_aggregate_target=(desired total size) workarea_size_policy=AUTO (!!!) » Should use only dedicated server! » Do NOT set legacy parameters like hash_area_size, sort_area_size (in 9i, they're only used for shared server)
Dedicated server » Dedicated Server = one server process oracle$ORACLE_SID per client connection (TNS). Proper way of operating with Blackboard, since pooling is already done on the client side (Java connection pool; a few fixed Perl DBI connections per mod_perl process).
Shared server » Shared Server (formerly known as MTS = Multi-Threaded Server) = small pool of server processes (ora_s###_$SID) held ready to serve incoming requests. Used mainly in situations where DB-server memory is scarce. Dispatchers (ora_d###_$SID) hook client requests up to servers.
Why not shared server? » Shared Server connections demand that work be submitted in small chunks, i.e. application code needs to be specially prepared for that » So set (max_)shared_servers=0 and max_dispatchers=0, and reset/remove any legacy mts_* parameters.
Memory Management » Ora10g: sga_target is all you need with Automatic Shared Memory Management (ASMM), provided statistics_level=typical or all » Ora9i: sga_max_size must be at least db_cache_size + log_buffer + shared_pool_size + large_pool_size » But: sga_max_size is only an upper bound! » No automatic redistribution in 9i! Still need to tune the various buffer sizes…
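In 9i the individual pools have to be resized by hand; a minimal sketch, with purely illustrative sizes (pick your own starting points and keep the sum under sga_max_size):

```sql
-- 9i: these three pools are dynamically resizable; their combined size
-- must stay below sga_max_size. Values below are examples only.
ALTER SYSTEM SET db_cache_size    = 512M SCOPE=BOTH;
ALTER SYSTEM SET shared_pool_size = 256M SCOPE=BOTH;
ALTER SYSTEM SET large_pool_size  =  32M SCOPE=BOTH;

-- log_buffer is static: SCOPE=SPFILE only, takes effect after restart.
ALTER SYSTEM SET log_buffer = 1048576 SCOPE=SPFILE;
```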
Finding the right pool sizes » There is no golden rule » Start with reasonably low values » Use BB perf report to see what values to adjust (section with advice about current params and suggested changes) » Make 10% adjustments, run it again » Details later in this presentation
How big should the PGA be? » Each BB connection = 1 Oracle server process (using dedicated server!) » 8i: PGA = #processes * size/process; for each process add bitmap_merge_area_size + hash_area_size + sort_area_size » 9i+: PGA_AGGREGATE_TARGET=200MB means each of 200 processes has 1MB in theory, but in reality each gets as much as it needs, dynamically allocated from the 200MB total
Monitoring PGA usage » Initially set to 16% of RAM (Oracle Corp. recommendation) » Use V$PGASTAT to monitor over-allocation count and cache hit percentage » Run a normal (!) workload, then raise PGA_AGGREGATE_TARGET as suggested by V$PGA_TARGET_ADVICE to achieve no over-allocations and a high cache hit percentage; restart, check again
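The two monitoring steps above can be done with a couple of queries against the standard 9i/10g views:

```sql
-- Current PGA health: over-allocations should be zero (or not growing),
-- cache hit percentage should be high.
SELECT name, value
  FROM v$pgastat
 WHERE name IN ('over allocation count',
                'cache hit percentage');

-- What Oracle predicts would happen at other target sizes:
SELECT pga_target_for_estimate/1024/1024 AS target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
  FROM v$pga_target_advice
 ORDER BY pga_target_for_estimate;
```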
(Mis-)Guiding the Optimizer » Bad shipping defaults in Oracle9i: » optimizer_index_caching=0 says: you don't normally have any index blocks cached in RAM (percent value) » More realistic: (in SSO: 50-60) » optimizer_index_cost_adj=100 says: index access is just as expensive as full table scans » More realistic: (i.e. cost is 1/5th or so) » Note: what's realistic depends on SGA tuning
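A quick way to try friendlier assumptions in one session before touching the spfile; the two values below are illustrative, derived from the "more realistic" hints above, not a recommendation:

```sql
-- Test only in the current session; compare the problem query's plan
-- and runtime before and after.
ALTER SESSION SET optimizer_index_caching  = 50;
ALTER SESSION SET optimizer_index_cost_adj = 20;
```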
(Mis-)Guiding the Optimizer » These params must be edited in the pfile/spfile, cannot be set via ALTER SYSTEM, so a DB restart is needed » But you can ALTER SESSION to test/try new values » Want to know what the true values are, i.e. what Oracle observed physically? » DBMS_STATS.GATHER_SYSTEM_STATS (must be run during TYPICAL workload) » (overrides your init params; to clean up if problems result, run DELETE_SYSTEM_STATS)
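A minimal sketch of gathering system statistics over a window of typical load (the 120-minute window is an assumed example, not from the deck):

```sql
-- Collect CPU and I/O characteristics while a representative workload
-- runs; interval is given in minutes.
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 120);

-- If plans get worse afterwards, remove the system stats again and fall
-- back to the init-parameter behavior:
EXEC DBMS_STATS.DELETE_SYSTEM_STATS;
```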
Guiding the Optimizer » optimizer_mode=CHOOSE lets CBO decide (based on stats) whether to quickly return some rows or optimize for complete result set » optimizer_features_enable=9.2.0 (minimum) can temporarily turn this down for troubleshooting bad CBO decisions » cursor_sharing=SIMILAR (or FORCE) improves plans on skewed columns » optimizer_dynamic_sampling=2 (later…) » timed_statistics=TRUE (default) » statistics_level=TYPICAL (default)
More than meets the eye » 9i/10g manage PGA/SGA automatically » …but legacy parameters can break this! » Are your current parameters defaults? V$PARAMETER.isdefault, V$SPPARAMETER.isspecified » Check whether you set something you don't really need, e.g. hash_area_size » Run your DB with an spfile, not a pfile
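The parameter audit suggested above boils down to two queries:

```sql
-- Which parameters have been explicitly set away from their defaults?
SELECT name, value
  FROM v$parameter
 WHERE isdefault = 'FALSE'
 ORDER BY name;

-- And which are actually recorded in the spfile?
SELECT name, value
  FROM v$spparameter
 WHERE isspecified = 'TRUE'
 ORDER BY name;
```

Anything that shows up here without a reason you can name (e.g. a leftover hash_area_size) is a candidate for removal.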
What are database statistics? » Physical info about your DB, e.g. number of rows, index depth, number of leaf blocks in index tree etc. » Helps CBO determine how many rows to expect from a (sub-)query » Includes histogram data to judge unbalanced distribution, e.g. uncommonly frequent values
Status Quo: analyze_my » Blackboard installs a stats-gathering job via its own analyze_my package » Uses the ANALYZE TABLE method » Job analyzes tables only » In Oracle 9i+, the job breaks with default permissions, since analyze_my needs access to V_$PARAMETER in the SYS schema » SELECT what, broken, failures, schema_user, last_date FROM dba_jobs;
Granting Dictionary Access » Oracle8i: no issue: o7_dictionary_accessibility is on » Can run 9i with this same init parameter (gives all DB users select access to all SYS objects) » Or: GRANT SELECT ANY DICTIONARY TO BB_BB60; then the same command for BBADMIN, BB_BB60_STATS (does the same thing for Blackboard users only) » Or: GRANT SELECT_CATALOG_ROLE TO …; GRANT SELECT ON V_$PARAMETER TO …; (most specific option: lets Blackboard users access SYS objects in interactive sessions, plus the one object needed for analyze_my within stored procedures, where roles don't work)
Fixing the broken jobs » After granting the necessary privileges, recompile the packages » ALTER PACKAGE bb_bb60.analyze_my COMPILE; (etc.) » Now the job is ready to be re-enabled; do that as each Blackboard user, e.g. connect as bb_bb60, then run EXEC DBMS_JOB.BROKEN(job#, FALSE); » May also need to re-SUBMIT it for tomorrow » Find job# by querying user_jobs
ANALYZE is OUT » DBMS_STATS preferred over ANALYZE » New approach: GATHER_SCHEMA_STATS( ownname=>'BB_BB60', cascade=>TRUE, method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO'); » cascade analyzes indexes, method_opt controls histogram generation; SIZE AUTO means: get detailed histograms only on columns we actually need them for (see sys.col_usage$)
Scheduling the DBMS_STATS call » Schedule weekly for Sunday morning » Will likely replace analyze_my in the future » Makes a huge difference for skewed columns due to histogram generation » Can be scheduled as DBA or as schema owner; does not depend on DB users having data dictionary access » Can augment with (bi-)nightly GATHER_SCHEMA_STATS( …, options=>'GATHER STALE') to replace nightly analyze_my runs » GATHER STALE requires that tables are monitored via EXEC DBMS_STATS.ALTER_SCHEMA_TAB_MONITORING('BB_BB60'); » Table monitoring is also needed for the SIZE AUTO clause
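One way to wire this up with the pre-10g DBMS_JOB scheduler; the "Sunday 04:00" schedule is an assumed example of the "Sunday morning" recommendation above:

```sql
-- Enable table monitoring first (prerequisite for GATHER STALE and
-- SIZE AUTO):
EXEC DBMS_STATS.ALTER_SCHEMA_TAB_MONITORING('BB_BB60');

-- Hypothetical weekly job: fires every Sunday at 04:00.
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'DBMS_STATS.GATHER_SCHEMA_STATS(ownname=>''BB_BB60'', cascade=>TRUE, method_opt=>''FOR ALL INDEXED COLUMNS SIZE AUTO'');',
    next_date => NEXT_DAY(TRUNC(SYSDATE), 'SUNDAY') + 4/24,
    interval  => 'NEXT_DAY(TRUNC(SYSDATE), ''SUNDAY'') + 4/24');
  COMMIT;
END;
/
```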
Histograms? Say what? » They tell the CBO about typical data values » e.g. Y and N should be equally likely, but look at layout.default_ind => only ~1 row has default_ind=Y, the rest is N » As skewed as it gets! » The CBO needs a histogram to choose the index when looking for default_ind=Y, i.e. when looking for the default portal layout (most common case!) » Note: histograms are only beneficial on skewed columns, mostly on indexed ones
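For the layout.default_ind example, you can check whether a histogram exists and force one explicitly; the table/column names follow the Blackboard schema mentioned above, the bucket count of 254 is an assumed maximum:

```sql
-- Does LAYOUT have histograms, and with how many buckets?
SELECT column_name, COUNT(*) AS buckets
  FROM user_histograms
 WHERE table_name = 'LAYOUT'
 GROUP BY column_name;

-- Gather a histogram on the skewed column explicitly if SIZE AUTO
-- has not picked it up yet:
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'LAYOUT', method_opt => 'FOR COLUMNS default_ind SIZE 254');
```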
Why is the DB so busy? » Look at v$session_longops to see what's keeping the DB busy » Perf report lists memory hogs and I/O hogs » Get explain plans for those queries » Analyze why the CBO chooses full table scans when it shouldn't » Does GATHER_SYSTEM_STATS fix it?
Are you missing statistics? » Stats-gathering jobs might be broken » Check last_analyzed in user_tables, user_indexes » Use the optimizer_dynamic_sampling=2 init param: protects against missing stats; does fast dynamic sampling in memory when stats are missing on the tables involved in a query; also gathers dynamic stats on temporary tables » Analyze the table(s) on the fly: does that fix it? » Do you have histograms? Check user_histograms
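The last_analyzed check above, as two quick queries; the 14-day staleness threshold is an arbitrary example:

```sql
-- Tables never analyzed, or not analyzed in the last two weeks:
SELECT table_name, last_analyzed
  FROM user_tables
 WHERE last_analyzed IS NULL
    OR last_analyzed < SYSDATE - 14;

-- Indexes with no statistics at all:
SELECT index_name, last_analyzed
  FROM user_indexes
 WHERE last_analyzed IS NULL;
```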
Explain Plan » EXPLAIN PLAN writes the plan into PLAN_TABLE; query it after an EXPLAIN » EXPLAIN PLAN FOR query » Or use AUTOTRACE for automatic explaining of all your session's queries
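A minimal sketch of both approaches; the query being explained is a made-up placeholder:

```sql
-- Explain a statement, then read the plan back with DBMS_XPLAN (9i+):
EXPLAIN PLAN FOR
  SELECT * FROM users WHERE user_id = 'jdoe';   -- hypothetical query

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Or, in SQL*Plus, have every subsequent statement explained
-- automatically without fetching its rows:
SET AUTOTRACE TRACEONLY EXPLAIN
```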
Tools for more CBO info » Statspack » tkprof » trace » Beyond the scope of this session
How to run the Perf Report » tools/perf_reports/run_reports.sh (run as root or bbuser) » output in logs/perf_reports/ » depends on: GRANT SELECT ANY DICTIONARY TO BBADMIN » Don't run it *often*, don't run 2 in parallel » Tuning advice is based on workload since DB startup, which should be typical
Some bugs in Perf Report in 6.3 and 7.0 » run_sql_reports.sh: comment out the line setting ORA_NLS33 (to a non-existing path) » get_tomcat_trace.sh, line 31: delete the extraneous colon » get_os_stats.sh, line 16: delete /usr/ucb/ for Solaris » These don't apply to our topic though » Can also run the queries in perf_reports/sql manually from sqlplus and spool the output to a file
Ask Tom – and RTFM » Reference manuals are your friend » Oracle makes the full set of manuals available for free download » Learn more: » Expert One-on-One Oracle (T. Kyte) » Effective Oracle by Design (T. Kyte) » Expert Oracle Database Architecture (T. Kyte)
Useful links » J. M. Hunter: excellent links & plenty of good articles: http://www.idevelopment.info » Jonathan Lewis » Howard Rogers » Steve Adams » Beware of self-proclaimed experts that don't document results » Technical forums can help & hurt
Wrapping up » Give the optimizer accurate info, and it will find the right plans » Correct statistics are the foundation » Monitoring SGA needs is secondary! » Watch for outdated settings » Blackboard Support helps with catastrophic problems only, Consulting Services does fine-tuning