Oracle Database Performance Secrets Finally Revealed
Greg Rahn & Michael Hallas, Oracle Real-World Performance Group, Server Technologies



3 About The Real-World Performance Group

4 About The Real-World Performance Group
Part of Server Technologies
Competitive customer benchmarks
Performance escalations

5 The secret to a chef’s masterpiece is all in the recipe

6 Agenda Troubleshooting approach Intro to the optimizer Demonstration Summary of secrets and lessons learned

7 What is your performance troubleshooting approach?

8 Do you...
Use Google as your first resource?
Use the Oracle Documentation as your second resource?
Believe in tuning databases by changing block size?
Frequently use database parameters to tune queries?
Believe in “Silver Bullet Tuning”?
Blindly apply previously successful solutions?
Practice the “Change and Test” troubleshooting approach?

9 The Change and Test Troubleshooting Approach

10 What really matters for troubleshooting performance?

11 Quite simply... it's all in the approach.

12 Stack Visualization

13 The Systematic Troubleshooting Approach
1. Define the problem and its scope, and identify the symptoms
2. Collect and analyze data/metrics
3. Ask “Why?” five times to identify the root cause
4. Understand and devise a change
5. Make a single change
6. Observe and measure the results
7. If necessary, back out the change; repeat the process

14 Systematic Troubleshooting Guidelines
Don’t just look inside the database; look outside as well
Make exactly one change at a time
The scope of the solution should match the scope of the problem
Choose the right tool for the job
Carefully document each change and its impact
Suppressing the problem is not the same as fixing the root cause
Realize that you may not be able to get to the root cause in just one step

15 Fix on Failure vs. Fix it Forever – The benefits of root cause analysis
Fix on Failure
– Finger pointing and the blame game
– Stressful for everyone
– Never time to fix it right the first time, but always plenty of time to keep fixing it time and time again
Fix it Forever
– Identify root causes of problems, so permanent solutions can be implemented
– Develop a logical, systematic and data-driven approach to problem solving

16 Example of Applying the “5 Whys”

17 Applying the “5 Whys” to “My batch job ran long last night”
1. Why? – A specific query took 5x as long
2. Why? – The execution plan changed from HJ to NLJ
3. Why? – The query optimizer costed the NLJ to be cheaper
4. Why? – Variables involved in the costing have changed
5. Why? – Statistics were gathered with the wrong options

18 Choosing Different Levels of Scope
System level
– database parameters – alter system
– object statistics
Session level
– alter session
Statement level
– hints
– SQL profiles, outlines & baselines
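The three scope levels above map directly onto SQL. A minimal sketch (the parameter value and the hinted query are illustrative, not recommendations):

```sql
-- System level: affects all sessions until changed again
alter system set optimizer_dynamic_sampling = 2;

-- Session level: affects only the current session
alter session set optimizer_dynamic_sampling = 4;

-- Statement level: a hint scopes the change to one query only
select /*+ dynamic_sampling(e 4) */ count(*)
from   emp e
where  deptno = 10;
```

The narrower the scope, the smaller the blast radius of a change, which is why statement-level fixes are usually preferable to system-level ones.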

19 Performance Troubleshooting Toolbox
ADDM, AWR, ASH reports and raw data
SQL Monitoring Active Report (11g)
DBMS_XPLAN
SQL Trace
V$SESSTAT
V$SESSION
Stack dumps (pstack)
OS metrics tools (collectl, iostat, vmstat, mpstat, etc.)
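As a sketch of pulling the SQL Monitoring Active Report mentioned above (the sql_id substitution variable is a placeholder you would take from the first query):

```sql
-- Find recently monitored statements
select sql_id, status, sql_text
from   v$sql_monitor
order  by sql_exec_start desc;

-- Generate the active (HTML) report for one of them
select dbms_sqltune.report_sql_monitor(
         sql_id => '&sql_id',
         type   => 'ACTIVE') as report
from   dual;
```

Spool the result to a .html file and open it in a browser to get the interactive report.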

20 Quick Introduction To The Optimizer

21 An Important Note About Cardinality Estimates
Good cardinality estimates generally result in a good plan; however, bad cardinality estimates do not always result in a bad plan.

22 Introducing the Cost-Based Optimizer – Cost and Cardinality
Cardinality
– Estimated number of rows returned from a join, a table or an index
– Factors influencing cardinality: query predicates and query variables; object statistics

23 Introducing the Optimizer – Cost and Cardinality
Cost
– Representation of resource consumption: CPU, disk I/O, memory, network I/O
– Factors influencing cost: cardinality and selectivity; cost model; parameters; system statistics

24 Good SQL and Bad SQL
Good SQL
– SQL that makes it possible for the optimizer to produce a good cardinality estimate
– select * from emp where ename != 'KING'
Bad SQL
– SQL that makes it difficult for the optimizer to produce a good cardinality estimate
– select * from emp where replace(ename, 'KING') is not null
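A standard way to check whether the optimizer got a good cardinality estimate for either form is to compare estimated rows against actual rows, reusing the table and predicate from the slide:

```sql
-- Run the statement with rowsource statistics collection enabled
select /*+ gather_plan_statistics */ *
from   emp
where  ename != 'KING';

-- E-Rows (estimated) vs A-Rows (actual), side by side for the last execution
select *
from   table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
```

A large gap between E-Rows and A-Rows on any plan line is the signature of the "bad SQL" case.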

25 Good Plans and Bad Plans
Good Plan
– Efficient retrieval or modification of the desired rows
– Highly selective index to retrieve few rows from a large table
– Scan to retrieve many rows from a large table
Bad Plan
– Inefficient retrieval or modification of the desired rows
– Scan to retrieve few rows from a large table
– Non-selective index to retrieve many rows from a large table

26 What is a Query Plan?
Access Path
– Table scan
– Index { fast full | full | range | skip | unique } scan
Join Method
– Hash
– Merge
– Nested loops
Join Order
Distribution Method
– Broadcast
– Hash

27 Challenges in Cardinality Estimation
Complex predicates
Correlation
Non-representative bind values
Out-of-range predicates
Skew
Statistics quality
– Frequency
– Histograms
– Sample size
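For the correlation challenge in particular, one common remedy (11g onwards) is a column-group extended statistic, sketched here on the correlated columns from the demo later in this deck:

```sql
-- Tell the optimizer that month and zodiac are correlated
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'PERSONS',
         extension => '(month, zodiac)') as extension_name
from   dual;

-- Re-gather so the column-group statistics are populated
exec dbms_stats.gather_table_stats(user, 'PERSONS')
```

Without the column group, the optimizer multiplies the two single-column selectivities as if they were independent, which is exactly how correlated predicates end up underestimated.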

28 What Is Dynamic Sampling?
Improves quality of cardinality estimates
Objects with no statistics
– Avoids the use of heuristics
– Less complete than statistics stored in the dictionary
Objects with existing statistics
– Predicates with complex expressions
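A quick way to see dynamic sampling fire on an object with no statistics (a sketch, assuming the EMP demo table used in the next slides):

```sql
-- Remove the dictionary statistics so the optimizer has nothing to work with
exec dbms_stats.delete_table_stats(user, 'EMP')

select count(*) from emp;

-- The plan output will carry the note:
--   dynamic sampling used for this statement (level=2)
select * from table(dbms_xplan.display_cursor);
```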

29 Demonstration

30 Dynamic Sampling – Counting Rows

SQL> select count(*) from emp;

  COUNT(*)

Execution Plan
Plan hash value:

| Id | Operation                  | Name | Rows | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT           |      |    1 |     3   (0)| 00:00:01 |
|  1 |  SORT AGGREGATE            |      |    1 |            |          |
|  2 |   TABLE ACCESS STORAGE FULL| EMP  |   14 |     3   (0)| 00:00:01 |

Note
- dynamic sampling used for this statement (level=2)

31 Dynamic Sampling – Projection

SQL> select ename from emp;

ENAME
SMITH
…
MILLER

14 rows selected.

Execution Plan
Plan hash value:

| Id | Operation                 | Name | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT          |      |   14 |       |     3   (0)| 00:00:01 |
|  1 |  TABLE ACCESS STORAGE FULL| EMP  |   14 |       |     3   (0)| 00:00:01 |

Note
- dynamic sampling used for this statement (level=2)

32 Dynamic Sampling – Aggregation

SQL> select deptno, count(*) from emp group by deptno;

DEPTNO  COUNT(*)

Execution Plan
Plan hash value:

| Id | Operation                  | Name | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT           |      |   14 |   182 |     4  (25)| 00:00:01 |
|  1 |  HASH GROUP BY             |      |   14 |   182 |     4  (25)| 00:00:01 |
|  2 |   TABLE ACCESS STORAGE FULL| EMP  |   14 |   182 |     3   (0)| 00:00:01 |

Note
- dynamic sampling used for this statement (level=2)

33 Dynamic Sampling – Flashback

SQL> select dbms_flashback.get_system_change_number scn from dual;
SQL> select ename from emp as of scn &get_scn.;

ENAME
SMITH
…
MILLER

14 rows selected.

Execution Plan
Plan hash value:

| Id | Operation                 | Name | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT          |      |  409 |  799K |     6   (0)| 00:00:01 |
|  1 |  TABLE ACCESS STORAGE FULL| EMP  |  409 |  799K |     6   (0)| 00:00:01 |

34 Dynamic Sampling – External Table

SQL> create table emp_ext organization external
  2  (type ORACLE_DATAPUMP default directory DATA_PUMP_DIR location ('emp_ext.dmp'))
  3  as select * from emp;

SQL> select ename from emp_ext;

ENAME
SMITH
…
MILLER

14 rows selected.

Execution Plan
Plan hash value:

| Id | Operation                  | Name    | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT           |         | 8168 |   15M |    29   (0)| 00:00:01 |
|  1 |  EXTERNAL TABLE ACCESS FULL| EMP_EXT | 8168 |   15M |    29   (0)| 00:00:01 |

35 Dynamic Sampling – Empty Partitions

SQL> create table emp_hash partition by hash(empno) partitions 1024
  2  as select * from emp;

SQL> select ename from emp_hash;

ENAME
ALLEN
…
JONES

14 rows selected.

Execution Plan
Plan hash value:

| Id | Operation                  | Name     | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|  0 | SELECT STATEMENT           |          |      |  166M |   291   (1)| 00:00:04 |       |       |
|  1 |  PARTITION HASH ALL        |          |      |  166M |   291   (1)| 00:00:04 |     1 |  1024 |
|  2 |   TABLE ACCESS STORAGE FULL| EMP_HASH |      |  166M |   291   (1)| 00:00:04 |     1 |  1024 |

36 Statistics – External Table

SQL> exec dbms_stats.gather_table_stats(USER,'EMP_EXT',estimate_percent=>NULL);
SQL> exec dbms_stats.gather_table_stats(USER,'EMP');

SQL> select ename from emp_ext;

ENAME
SMITH
…
MILLER

14 rows selected.

Execution Plan
Plan hash value:

| Id | Operation                  | Name    | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT           |         |   14 |    84 |     2   (0)| 00:00:01 |
|  1 |  EXTERNAL TABLE ACCESS FULL| EMP_EXT |   14 |    84 |     2   (0)| 00:00:01 |

37 Statistics – Flashback

SQL> select ename from emp as of scn &get_scn.;

ENAME
SMITH
…
MILLER

14 rows selected.

Execution Plan
Plan hash value:

| Id | Operation                 | Name | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT          |      |   14 |    84 |     6   (0)| 00:00:01 |
|  1 |  TABLE ACCESS STORAGE FULL| EMP  |   14 |    84 |     6   (0)| 00:00:01 |

38 Statistics – Aggregation

SQL> select deptno, count(*) from emp group by deptno;

DEPTNO  COUNT(*)

Execution Plan
Plan hash value:

| Id | Operation                  | Name | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT           |      |    3 |     9 |     4  (25)| 00:00:01 |
|  1 |  HASH GROUP BY             |      |    3 |     9 |     4  (25)| 00:00:01 |
|  2 |   TABLE ACCESS STORAGE FULL| EMP  |   14 |    42 |     3   (0)| 00:00:01 |

39 Statistics – Projection

SQL> select ename from emp;

ENAME
SMITH
…
MILLER

14 rows selected.

Execution Plan
Plan hash value:

| Id | Operation                 | Name | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT          |      |   14 |    84 |     3   (0)| 00:00:01 |
|  1 |  TABLE ACCESS STORAGE FULL| EMP  |   14 |    84 |     3   (0)| 00:00:01 |

40 Parallel Query Example – Good SQL

SQL> select /*+ MONITOR */
  2    count(p1.text) "in may"
  3  , count(p2.text) "under gemini"
  4  , sum(decode(p2.text, null, 1, 0)) "in may AND under gemini"
  5  from
  6    (select person_id, 'birthdays in may' as text from persons where month = 'may') p1
  7  , (select person_id, 'birthdays under gemini' as text from persons where zodiac = 'gemini') p2
  8  where p1.person_id = p2.person_id(+);

in may  under gemini  in may AND under gemini

Elapsed: 00:00:05.28

Parallel Query Example – Good SQL (execution plan)

| Id | Operation                         | Name     | Rows | Bytes | Cost (%CPU)| TQ    |IN-OUT| PQ Distrib |
|  0 | SELECT STATEMENT                  |          |    1 |    42 |   123K  (1)|       |      |            |
|  1 |  SORT AGGREGATE                   |          |    1 |    42 |            |       |      |            |
|  2 |   PX COORDINATOR                  |          |      |       |            |       |      |            |
|  3 |    PX SEND QC (RANDOM)            | :TQ10002 |    1 |    42 |            | Q1,02 | P->S | QC (RAND)  |
|  4 |     SORT AGGREGATE                |          |    1 |    42 |            | Q1,02 | PCWP |            |
|* 5 |      HASH JOIN OUTER              |          |  32M | 1286M |   123K  (1)| Q1,02 | PCWP |            |
|  6 |       PX RECEIVE                  |          |  32M |  459M |         (1)| Q1,02 | PCWP |            |
|  7 |        PX SEND HASH               | :TQ10000 |  32M |  459M |         (1)| Q1,00 | P->P | HASH       |
|  8 |         PX BLOCK ITERATOR         |          |  32M |  459M |         (1)| Q1,00 | PCWC |            |
|* 9 |          TABLE ACCESS STORAGE FULL| PERSONS  |  32M |  459M |         (1)| Q1,00 | PCWP |            |
| 10 |       PX RECEIVE                  |          |  31M |  798M |         (1)| Q1,02 | PCWP |            |
| 11 |        PX SEND HASH               | :TQ10001 |  31M |  798M |         (1)| Q1,01 | P->P | HASH       |
| 12 |         PX BLOCK ITERATOR         |          |  31M |  798M |         (1)| Q1,01 | PCWC |            |
|*13 |          TABLE ACCESS STORAGE FULL| PERSONS  |  31M |  798M |         (1)| Q1,01 | PCWP |            |

42 Parallel Query Example – Bad SQL

SQL> select /*+ MONITOR */
  2    count(p1.text) "in may"
  3  , count(p2.text) "under gemini"
  4  , sum(decode(p2.text, null, 1, 0)) "in may AND under gemini"
  5  from
  6    (select person_id, 'birthdays in may' as text from persons where month = 'may') p1
  7  , (select person_id, 'birthdays under gemini' as text from persons where month in ('may', 'june') and lower(zodiac) = 'gemini') p2
  8  where p1.person_id = p2.person_id(+);

in may  under gemini  in may AND under gemini

Elapsed: 00:00:27.34

Parallel Query Example – Bad SQL (execution plan)

| Id | Operation                         | Name     | Rows | Bytes | Cost (%CPU)| TQ    |IN-OUT| PQ Distrib |
|  0 | SELECT STATEMENT                  |          |    1 |    42 |   123K  (1)|       |      |            |
|  1 |  SORT AGGREGATE                   |          |    1 |    42 |            |       |      |            |
|  2 |   PX COORDINATOR                  |          |      |       |            |       |      |            |
|  3 |    PX SEND QC (RANDOM)            | :TQ10001 |    1 |    42 |            | Q1,01 | P->S | QC (RAND)  |
|  4 |     SORT AGGREGATE                |          |    1 |    42 |            | Q1,01 | PCWP |            |
|* 5 |      HASH JOIN RIGHT OUTER        |          |  32M | 1286M |   123K  (1)| Q1,01 | PCWP |            |
|  6 |       PX RECEIVE                  |          | 618K |   15M |         (2)| Q1,01 | PCWP |            |
|  7 |        PX SEND BROADCAST          | :TQ10000 | 618K |   15M |         (2)| Q1,00 | P->P | BROADCAST  |
|  8 |         PX BLOCK ITERATOR         |          | 618K |   15M |         (2)| Q1,00 | PCWC |            |
|* 9 |          TABLE ACCESS STORAGE FULL| PERSONS  | 618K |   15M |         (2)| Q1,00 | PCWP |            |
| 10 |       PX BLOCK ITERATOR           |          |  32M |  459M |         (1)| Q1,01 | PCWC |            |
|*11 |        TABLE ACCESS STORAGE FULL  | PERSONS  |  32M |  459M |         (1)| Q1,01 | PCWP |            |

44 Parallel Query Example – Bad SQL with Dynamic Sampling

SQL> alter session set optimizer_dynamic_sampling = 2;

Session altered.

SQL> select /*+ MONITOR */
  2    count(p1.text) "in may"
  3  , count(p2.text) "under gemini"
  4  , sum(decode(p2.text, null, 1, 0)) "in may AND under gemini"
  5  from
  6    (select person_id, 'birthdays in may' as text from persons where month = 'may') p1
  7  , (select person_id, 'birthdays under gemini' as text from persons where month in ('may', 'june') and lower(zodiac) = 'gemini') p2
  8  where p1.person_id = p2.person_id(+);

in may  under gemini  in may AND under gemini

Elapsed: 00:00:07.21

Parallel Query Example – Bad SQL with Dynamic Sampling (execution plan)

| Id | Operation                         | Name     | Rows | Bytes | Cost (%CPU)| TQ    |IN-OUT| PQ Distrib |
|  0 | SELECT STATEMENT                  |          |    1 |    42 |   124K  (1)|       |      |            |
|  1 |  SORT AGGREGATE                   |          |    1 |    42 |            |       |      |            |
|  2 |   PX COORDINATOR                  |          |      |       |            |       |      |            |
|  3 |    PX SEND QC (RANDOM)            | :TQ10002 |    1 |    42 |            | Q1,02 | P->S | QC (RAND)  |
|  4 |     SORT AGGREGATE                |          |    1 |    42 |            | Q1,02 | PCWP |            |
|* 5 |      HASH JOIN OUTER              |          |  32M | 1287M |   124K  (1)| Q1,02 | PCWP |            |
|  6 |       PX RECEIVE                  |          |  32M |  459M |         (1)| Q1,02 | PCWP |            |
|  7 |        PX SEND HASH               | :TQ10000 |  32M |  459M |         (1)| Q1,00 | P->P | HASH       |
|  8 |         PX BLOCK ITERATOR         |          |  32M |  459M |         (1)| Q1,00 | PCWC |            |
|* 9 |          TABLE ACCESS STORAGE FULL| PERSONS  |  32M |  459M |         (1)| Q1,00 | PCWP |            |
| 10 |       PX RECEIVE                  |          |  43M | 1126M |         (2)| Q1,02 | PCWP |            |
| 11 |        PX SEND HASH               | :TQ10001 |  43M | 1126M |         (2)| Q1,01 | P->P | HASH       |
| 12 |         PX BLOCK ITERATOR         |          |  43M | 1126M |         (2)| Q1,01 | PCWC |            |
|*13 |          TABLE ACCESS STORAGE FULL| PERSONS  |  43M | 1126M |         (2)| Q1,01 | PCWP |            |
…

Note
- dynamic sampling used for this statement (level=6)

46 What Have We Seen? Cardinality Drives Plan Selection
Low cardinality estimate: ▲ Broadcast ▼ Hash
High cardinality estimate: ▼ Broadcast ▲ Hash

47 What Have We Seen?
SQL Monitor Report
– Ideal tool to use for statement troubleshooting
– Can be used on active statements
Dynamic Sampling
– Good way of getting better cardinality estimates
– Be cautious when using DS without table stats
– Parallel execution chooses the level automatically (11.2)
– RWP used level 4 or 5 for data warehouses (11.1)

48 Effect of Cardinality Estimates – Access Methods and Join Methods
Low cardinality estimate: ▲ Indexes ▲ Nested Loop joins ▼ Hash joins ▼ Merge joins ▼ Scans
High cardinality estimate: ▼ Indexes ▼ Nested Loop joins ▲ Hash joins ▲ Merge joins ▲ Scans

49 Initialization Parameters – Access Methods and Join Methods
▲ Indexes ▲ Nested Loop joins ▼ Hash joins ▼ Merge joins ▼ Scans
▼ Indexes ▼ Nested Loop joins ▲ Hash joins ▲ Merge joins ▲ Scans

50 Initialization Parameters Access Methods and Join Methods ▲ Indexes ▲ Nested Loop joins ▼ Hash joins ▼ Merge joins ▼ Scans

51 What Have We Seen?
SQL Tuning Advisor
– Helps identify better plans
SQL Profile
– “Patch” a plan
– Generated by SQL Tuning Advisor or identified by the user
– Force matching can match literals
SQLT/SQLTXPLAIN
– coe_xfr_sql_profile.sql
– See MOS note
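The advisor-and-profile flow above can be driven entirely from PL/SQL; a sketch (the sql_id and task name are placeholders you would supply):

```sql
declare
  l_task varchar2(64);
begin
  -- Create and run a tuning task for a cursor already in the shared pool
  l_task := dbms_sqltune.create_tuning_task(sql_id => '&sql_id');
  dbms_sqltune.execute_tuning_task(l_task);
end;
/

-- Review the findings and recommendations
select dbms_sqltune.report_tuning_task('&task_name') from dual;

-- If a SQL Profile is recommended and looks sound, accept it:
-- exec dbms_sqltune.accept_sql_profile(task_name => '&task_name')
```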

52 What People Think Are Performance Secrets
Undocumented parameters
Documented parameters
Undocumented events
Index rebuilds and table reorgs
Fiddling with block size
Silver Bullets

53 What Are The Real-World Performance Secrets
Use a systematic approach, always
Let data (metrics, etc.) guide your troubleshooting
Match the scope of the problem and solution
Realize that you may not be able to get to the root cause in just one step

54 What Are The Real-World Performance Secrets
Use a systematic approach, always
Isolate as much as possible
Let data (metrics, etc.) guide your troubleshooting
Use the right tool for the job
Match the scope of the problem and solution
Make exactly one change at a time
Realize that you may not be able to get to the root cause in just one step

55 The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
