Database Systems Implementation (236510)



Presentation on theme: "Database Systems Implementation (236510)"— Presentation transcript:

1 Database Systems Implementation (236510)
Oracle 12c Database Performance Tuning, by David Itshak. Global Hebrew Virtual PASS Chapter; SQLSaturday Israel 2016.

2 Reference and Credits
Oracle® Database Concepts, 12c Release 1 (12.1)
Oracle® Database SQL Language Reference, 12c Release 1 (12.1)
Oracle® Database SQL Tuning Guide, 12c Release 1 (12.1)
Oracle Database 12c Performance Tuning Recipes (Apress)
Expert Oracle SQL: Optimization, Deployment, and Statistics (Apress)
Troubleshooting Oracle Performance, 2nd edition (Apress)

3 Agenda
Vocabulary; Purpose of SQL Tuning; Oracle Database Tuning Tools; Troubleshooting Poor Performance; Cursors; Soft & Hard Parse; SQL Processing; Query Optimizer; Selectivity and Cardinality; Execution Plans; Joins; Histograms

4 Vocabulary
Bottleneck: the area where resource contention is highest.
Throughput: work done per unit of time.
Response time: the time to complete a given workload.
Baseline: should capture usage highs and lows.
Snapshot: can be overlaid on the baseline(s) for comparison.
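Baselines and snapshots are typically managed through AWR. A minimal sketch of fixing a baseline over an existing snapshot range (requires the Diagnostics Pack license; the snapshot IDs 100 and 110 and the baseline name are hypothetical):

SQL> BEGIN
  2    dbms_workload_repository.create_baseline(
  3      start_snap_id => 100,   -- placeholder: start of a representative period
  4      end_snap_id   => 110,   -- placeholder: end of that period
  5      baseline_name => 'weekday_peak');
  6  END;
  7  /

Later AWR snapshots can then be compared against this baseline, for example with an AWR compare-periods report.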

5 Purpose of SQL Tuning
Reduce user response time: decrease the time between when a user issues a statement and receives a response.
Improve throughput: use the least amount of resources necessary to process all rows accessed by a statement.
A SQL statement becomes a problem when it fails to perform according to a predetermined and measurable standard. After you have identified the problem, a typical tuning session has one of these two goals.

6 Purpose of SQL Tuning
Response time example: an online book seller application that hangs for three minutes after a customer updates the shopping cart.
Throughput example: a three-minute parallel query in a data warehouse that consumes all of the database host CPU, preventing other queries from running.
In each case the user response time is three minutes, but the cause of the problem is different, and so is the tuning goal.
Parallel query: a query in which multiple processes work together simultaneously to run a single SQL statement. By dividing the work among multiple processes, Oracle Database can run the statement more quickly; for example, four processes retrieve rows for four different quarters in a year instead of one process handling all four quarters by itself.

7 Working on the Problem
Troubleshooting several problems at the same time is much more complex than troubleshooting a single problem. Whenever possible, work one problem at a time: take the list of problems and go through them according to their priority level.
Where is time spent? First, identify where the time goes. For example, if a specific operation takes ten seconds, find out in which module or component most of those ten seconds are used up.
How is time spent? Once you know where the time goes, find out how that time is spent. For example, you may find that the component spends 4.2 seconds on CPU, 0.4 seconds doing disk I/O operations, and 5.1 seconds waiting to dequeue a message coming from another component.
How can the time spent be reduced? Finally, find out how the operation can be made faster. To do so, it is essential to focus on the most time-consuming part of the processing. For example, if disk I/O operations take 4% of the overall processing time, it makes no sense to start optimizing them, even if they are very slow.
To find out where and how the time is spent, the analysis should start by collecting end-to-end performance data about the execution of the operation you are concerned with (see the trace sketch below). This is essential because multitier architectures are currently the de facto standard for applications that need a database like Oracle. In the simplest cases, at least two tiers (client/server) are implemented; most of the time there are three: presentation, logic, and data. Frequently, for security or workload-management purposes, components are spread over multiple machines as well. (Figure 1-5 in the source shows a typical web application consisting of several components deployed on multiple systems.)
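Inside the database tier, one standard way to capture that where/how breakdown (a sketch, not part of the slides) is extended SQL trace enabled through the dbms_monitor package; the sid and serial# values below are placeholders for the session under investigation, and the resulting trace file can be profiled with TKPROF:

SQL> BEGIN
  2    dbms_monitor.session_trace_enable(
  3      session_id => 123,    -- placeholder SID
  4      serial_num => 45,     -- placeholder serial#
  5      waits      => TRUE,   -- include wait events (the "how")
  6      binds      => FALSE);
  7  END;
  8  /
-- ...reproduce the slow operation, then:
SQL> EXEC dbms_monitor.session_trace_disable(session_id => 123, serial_num => 45);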

8 Oracle Database Tuning Tools
Oracle Database 12c Enterprise Edition
Oracle Enterprise Manager Cloud Control: separate installation/licensing from the database
Oracle Diagnostics Pack
Oracle gives you all the "toys," but be careful: license audits can be costly.
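One way to reduce that audit risk (my suggestion, not from the slides) is the control_management_pack_access initialization parameter, which governs whether Diagnostics and Tuning Pack features are enabled at all:

SQL> SHOW PARAMETER control_management_pack_access
-- Accepted values: NONE, DIAGNOSTIC, DIAGNOSTIC+TUNING.
-- If the packs are not licensed, setting NONE prevents accidental use:
SQL> ALTER SYSTEM SET control_management_pack_access = 'NONE';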

9 Troubleshooting poor performance
Solving database performance issues sometimes requires the use of operating system (OS) utilities. These tools often provide information that can help isolate database performance problems. Consider the following situations:
• You're running multiple databases and multiple applications on one server and want to use OS utilities to identify which process is consuming the most OS resources. This approach is invaluable when a process (which may or may not be related to a database) is consuming resources to the point of causing one or more of the databases on the box to perform poorly.
• You need to verify whether the database server is adequately sized for the current application workload in terms of CPU, memory, storage I/O, and network bandwidth.
• An analysis is needed to determine at what point the server will not be able to handle larger (future) workloads.
• You're leasing resources within a cloud and want to verify that you're consuming the resources you're paying for.
• You've used database tools to identify system bottlenecks and want to double-check the analysis via OS tools.
In these scenarios, to effectively analyze, tune, and troubleshoot, you'll need to employ OS tools to identify resource-intensive processes. Furthermore, if you have multiple databases and applications running on one server, it's often more efficient to first determine which database and process is consuming the most resources. Operating system utilities help pinpoint whether the bottleneck is CPU, memory, storage I/O, or the network. Once you've identified the OS identifier, you can query the database to show any corresponding database processes and SQL statements.
Figure 6-1 in the source details the decision-making process and the relevant OS tools that a DBA steps through when diagnosing sluggish server performance. One common first task is to log on to the box and quickly check for disk space issues using OS utilities like df and du; a full mount point is a common cause of database unresponsiveness. After inspecting disk space, the next task is to use an OS utility such as vmstat, top, or ps to determine what type of bottleneck you have (CPU, memory, I/O, or network). After determining the type of bottleneck, the next step is to determine whether a database process is causing the problem. The ps command is useful for displaying the process name and ID of the resource-consuming session; when multiple databases run on one box, the process name tells you which database the process belongs to. Once you have the process ID and the associated database, you can log on to the database and run SQL queries to determine whether the process is associated with a SQL query (see the sketch below). If the problem is SQL-related, you can then identify further details about the SQL query and where it might be tuned.
Figure 6-1 also encapsulates the difficulty of troubleshooting performance problems. Correctly pinpointing the cause of performance issues and recommending an efficient solution is often easier said than done. Some paths result in relatively efficient and inexpensive solutions, such as terminating a runaway OS process, creating an index, or generating fresh statistics; other decisions may lead you to conclude that you need to add expensive hardware or redesign the system.
Your performance tuning conclusions can have a long-lasting financial impact on your company and thus influence your ability to retain a job. Obviously, you want to focus on the cause of a performance problem and not just address the symptom. If you can consistently identify the root cause of a performance issue and recommend an effective and inexpensive solution, you will greatly increase your value to your employer. The focus here is on using Linux/Unix OS utilities to identify server performance issues; these utilities provide extra information for diagnosing performance problems outside the tools available within the database, acting as an extra set of eyes to help zero in on the cause of poor database performance.
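Once ps or top has given you the OS process ID of the resource hog, a standard query (a sketch using the v$process and v$session views; the spid value is a placeholder) maps it back to the database session and the SQL it is running:

SQL> SELECT s.sid, s.serial#, s.username, s.status, s.sql_id, q.sql_text
  2  FROM v$process p
  3       JOIN v$session s ON s.paddr = p.addr
  4       LEFT JOIN v$sql q ON q.sql_id = s.sql_id AND q.child_number = 0
  5  WHERE p.spid = '12345';   -- placeholder: PID reported by ps/top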

10 What Is a Cursor?
A cursor is a handle (a memory structure that enables a program to access a resource) that references a private SQL area with an associated shared SQL area. Although the handle is a client-side memory structure, it references a memory structure allocated by a server process that, in turn, references a memory structure stored in the SGA, more precisely in the library cache. (Figure 2-1 in the source: a cursor is a handle to a private SQL area with an associated shared SQL area.)
Note: in practice, the terms cursor and private/shared SQL area are used interchangeably.

11 What Is a Cursor?
Private SQL area: stores data such as bind variable values and query execution state information; it belongs to a specific session. The session memory used to store private SQL areas is called the user global area (UGA).
Shared SQL area: consists of two separate structures, the parent cursor and the child cursor.
Parent cursor: stores the text of the SQL statement associated with the cursor. Simply put, the SQL statement specifies the processing to be performed.
Child cursor: stores the execution environment and the execution plan. These elements specify how the processing is carried out.
A shared SQL area can be used by several sessions, and therefore it's stored in the library cache.

12 What Is a Cursor?
Life Cycle of a Cursor. Understanding the life cycle of cursors is required knowledge for optimizing applications that execute SQL statements. The following steps are carried out during the processing of a cursor:
1. Open cursor: a private SQL area is allocated in the UGA of the session used to open the cursor, and a client-side handle referencing the private SQL area is allocated. No SQL statement is associated with the cursor yet.
2. Parse cursor: a shared SQL area containing the parsed representation of the SQL statement and its execution plan (which describes how the SQL engine will execute the statement) is generated and loaded into the SGA, specifically into the library cache. The private SQL area is updated to store a reference to the shared SQL area.
3. Define output variables: if the SQL statement returns data, the variables receiving it must be defined. This is necessary not only for queries but also for DELETE, INSERT, and UPDATE statements that use the RETURNING clause.
4. Bind input variables: if the SQL statement uses bind variables, their values must be provided. No check is performed during the binding; if invalid data is passed, a runtime error is raised during the execution.
5. Execute cursor: the SQL statement is executed. Be careful: the database engine doesn't always do anything significant during this phase; for many types of queries the real processing is usually delayed to the fetch phase.
6. Fetch cursor: if the SQL statement returns data, this step retrieves it. Especially for queries, this is where most of the processing is performed. The result set might be partially fetched; in other words, the cursor might be closed before fetching all the rows.
7. Close cursor: the resources associated with the handle and the private SQL area are freed and made available for other cursors. The shared SQL area in the library cache isn't removed; it remains there in the hope of being reused in the future.
To understand this process, it's best to think of each step being executed separately, in the order shown by Figure 2-2 in the source. In practice, different optimization techniques are applied to speed up processing; for example, bind variable peeking requires that the generation of the execution plan be delayed until the values of the bind variables are known. Depending on the programming environment or techniques you're using, the steps may be implicitly or explicitly executed. The following two PL/SQL blocks, available in the lifecycle.sql script, have the same purpose (reading one row from the emp table) but are coded very differently.
The first is a PL/SQL block using the dbms_sql package to explicitly code every step:

DECLARE
  l_ename  emp.ename%TYPE := 'SCOTT';
  l_empno  emp.empno%TYPE;
  l_cursor INTEGER;
  l_retval INTEGER;
BEGIN
  l_cursor := dbms_sql.open_cursor;                                -- 1. open
  dbms_sql.parse(l_cursor,
                 'SELECT empno FROM emp WHERE ename = :ename', 1); -- 2. parse
  dbms_sql.define_column(l_cursor, 1, l_empno);                    -- 3. define
  dbms_sql.bind_variable(l_cursor, ':ename', l_ename);             -- 4. bind
  l_retval := dbms_sql.execute(l_cursor);                          -- 5. execute
  IF dbms_sql.fetch_rows(l_cursor) > 0 THEN                        -- 6. fetch
    dbms_sql.column_value(l_cursor, 1, l_empno);
    dbms_output.put_line(l_empno);
  END IF;
  dbms_sql.close_cursor(l_cursor);                                 -- 7. close
END;

The second is a PL/SQL block taking advantage of an implicit cursor; basically, the PL/SQL block delegates control over the cursor to the PL/SQL compiler:

DECLARE
  l_ename emp.ename%TYPE := 'SCOTT';
  l_empno emp.empno%TYPE;
BEGIN
  SELECT empno
  INTO l_empno
  FROM emp
  WHERE ename = l_ename;
  dbms_output.put_line(l_empno);
END;

Most of the time, what the compiler does is fine; internally, it generates code similar to the first block. However, sometimes you need more control over the individual steps, and then you can't use an implicit cursor. Between the two blocks there is one slight but important difference: independently of how many rows the query returns, the first block doesn't raise an exception, whereas the second raises an exception if zero rows or several rows are returned.
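The exceptions the implicit form raises are NO_DATA_FOUND and TOO_MANY_ROWS; a minimal sketch (my illustration, not from the lifecycle.sql script) of handling them explicitly:

DECLARE
  l_empno emp.empno%TYPE;
BEGIN
  SELECT empno INTO l_empno FROM emp WHERE ename = 'NOBODY';
  dbms_output.put_line(l_empno);
EXCEPTION
  WHEN no_data_found THEN
    dbms_output.put_line('the query returned zero rows');
  WHEN too_many_rows THEN
    dbms_output.put_line('the query returned more than one row');
END;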

13 What Is a Cursor?
Once stored in the library cache, parent and child cursors are externalized through the v$sqlarea and v$sql views, respectively. In most cases, cursors are identified by two columns: sql_id and child_number. The sql_id column identifies the parent cursor; both values together identify a child cursor. Strictly speaking, the identifier of a cursor is its memory address, for both the parent and the child, and there are cases where the two columns together aren't sufficient: depending on the version, parent cursors with many children are obsoleted and replaced by new ones, so the address column is also required to identify a cursor.
How Parsing Works. The following steps are carried out during the parse phase:
1. Include VPD predicates: if Virtual Private Database (VPD, formerly known as row-level security) is in use and active for one of the tables referenced in the parsed SQL statement, the predicates generated by the security policies are included in its WHERE clause.
2. Check syntax, semantics, and access rights: this step makes sure not only that the SQL statement is correctly written but also that all objects referenced by the SQL statement exist and that the user parsing it has the necessary privileges to access them.
3. Store parent cursor in a shared SQL area: whenever a shareable parent cursor isn't yet available, some memory is allocated from the library cache and a new parent cursor is stored inside it.
4. Generate execution plan: during this phase, the query optimizer produces an execution plan for the parsed SQL statement.
5. Store child cursor in a shared SQL area: some memory is allocated, and the shareable child cursor is stored inside it and associated with its parent cursor.
(Figure 2-3 in the source: steps carried out during the parse phase. The soft/hard parse distinction and its performance implications are covered on the next slide.)

14 Soft & Hard Parse
When shareable parent and child cursors are available and, consequently, only the first two operations are carried out, the parse is called a soft parse. When all operations are carried out, it's called a hard parse.
Hard parses should be avoided! There are two reasons:
• The generation of an execution plan is a very CPU-intensive operation.
• Memory in the shared pool is needed for storing the parent and child cursors in the library cache. Because the shared pool is shared over all sessions, memory allocations in it are serialized: one of the latches protecting the shared pool (the shared pool latches) must be obtained to allocate the memory needed for both the parent and child cursors. Because of this serialization, an application causing a lot of hard parses is likely to experience contention for shared pool latches.
This is precisely why the database engine stores shareable cursors in the library cache: every process belonging to the instance can then reuse them.
Even though the impact of soft parses is much lower than that of hard parses, avoiding soft parses is desirable as well, because they're also subject to some serialization: the database engine must guarantee that the memory structures it accesses aren't modified while it's searching for a shareable cursor. The actual implementation depends on the version: in earlier releases one of the library cache latches had to be obtained; Oracle then started replacing the library cache latches with mutexes, and as of version 11.1 only mutexes are used for that purpose.
In summary, avoid soft and hard parses as much as possible, because they inhibit the scalability of applications.
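To see how much parsing your own session performs (a sketch based on the standard v$mystat and v$statname views):

SQL> SELECT sn.name, ms.value
  2  FROM v$mystat ms
  3       JOIN v$statname sn ON sn.statistic# = ms.statistic#
  4  WHERE sn.name IN ('parse count (total)', 'parse count (hard)');

A hard-parse count that grows almost as fast as the total parse count usually points at missing bind variables or non-shareable statement texts.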

15 Shareable Cursors
The result of a parse operation is a parent cursor and a child cursor stored in a shared SQL area inside the library cache. The aim of storing them in a shared memory area is to allow their reutilization and thereby avoid hard parses. The key information related to a parent cursor is the text of the SQL statement, so in general several SQL statements share the same parent cursor only if their text is exactly the same. (There is an exception when cursor sharing is enabled: the database engine can automatically replace the literals used in SQL statements with bind variables, so the text stored in the parent cursor is modified.)
In the first example, based on the sharable_parent_cursors.sql script, four SQL statements are executed. Two have the same text; the two others differ only in lowercase/uppercase letters or blanks, so each gets its own parent cursor:

SQL> SELECT * FROM t WHERE n = 1234;
SQL> select * from t where n = 1234;
SQL> SELECT * FROM t WHERE n=1234;
SQL> SELECT * FROM t WHERE n = 1234;

SQL> SELECT sql_id, sql_text, executions
  2  FROM v$sqlarea
  3  WHERE sql_text LIKE '%1234';

SQL_ID         SQL_TEXT                         EXECUTIONS
-------------  -------------------------------  ----------
2254m1487jg50  select * from t where n = 1234            1
g9y3jtp6ru4cb  SELECT * FROM t WHERE n = 1234            2
7n8p5s2udfdsn  SELECT * FROM t WHERE n=1234              1
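When cursor sharing is enabled, the literal is replaced before the text is stored in the parent cursor, so statements differing only in literals can share it. A minimal sketch (my illustration; the system-generated bind name SYS_B_0 is standard Oracle behavior):

SQL> ALTER SESSION SET cursor_sharing = FORCE;
SQL> SELECT * FROM t WHERE n = 1234;
SQL> SELECT * FROM t WHERE n = 5678;
SQL> SELECT sql_text, executions
  2  FROM v$sqlarea
  3  WHERE sql_text LIKE 'SELECT * FROM t WHERE n = :%';

Both executions now show up under a single text of the form SELECT * FROM t WHERE n = :"SYS_B_0".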

16 Shareable Cursors
The second example, based on the sharable_child_cursors.sql script, shows a case where the parent cursor, but not the child cursor, can be shared. The key information related to a child cursor is an execution plan and the execution environment related to it; several SQL statements can share the same child cursor only if they share the same parent cursor and their execution environments are compatible. To illustrate, the same SQL statement is executed with two different values of the optimizer_mode initialization parameter:

SQL> ALTER SESSION SET optimizer_mode = all_rows;
SQL> SELECT count(*) FROM t;

  COUNT(*)
----------
      1000

SQL> ALTER SESSION SET optimizer_mode = first_rows_1;
SQL> SELECT count(*) FROM t;

SQL> SELECT sql_id, child_number, optimizer_mode, plan_hash_value
  2  FROM v$sql
  3  WHERE sql_text = 'SELECT count(*) FROM t';

SQL_ID         CHILD_NUMBER  OPTIMIZER_MODE  PLAN_HASH_VALUE
5tjqf7sx5dzmj             0  ALL_ROWS
5tjqf7sx5dzmj             1  FIRST_ROWS

The result is a single parent cursor (5tjqf7sx5dzmj) and two child cursors (0 and 1). Both child cursors have the same execution plan (the plan_hash_value column shows the same value in both rows). This shows that the new child cursor was created because of a new execution environment, not because another execution plan was generated.
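To see which sharing criterion forced the new child cursor, you can check the corresponding flag columns of v$sql_shared_cursor (a sketch; for this example the optimizer_mode_mismatch column would be expected to show Y for child 1):

SQL> SELECT child_number, optimizer_mode_mismatch
  2  FROM v$sql_shared_cursor
  3  WHERE sql_id = '5tjqf7sx5dzmj';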

17 Bind Variables
Bind variables impact applications in three ways. First, from a development point of view, they make programming either easier or more difficult depending on the API: in PL/SQL it's easier to execute statements with bind variables, whereas in Java with JDBC it's easier without. Second, from a security point of view, bind variables mitigate the risk of SQL injection. Third, from a performance point of view, they introduce both an advantage and a disadvantage.
The advantage of bind variables for performance is that they allow the sharing of parent cursors in the library cache, thereby avoiding hard parses and the overhead associated with them. Example (an excerpt of the output generated by the bind_variables_graduation.sql script): three INSERT statements that, thanks to bind variables, share the same cursor in the library cache:

SQL> VARIABLE n NUMBER
SQL> VARIABLE v VARCHAR2(32)
SQL> EXECUTE :n := 1; :v := 'Helicon';
SQL> INSERT INTO t (n, v) VALUES (:n, :v);
SQL> EXECUTE :n := 2; :v := 'Trantor';
SQL> INSERT INTO t (n, v) VALUES (:n, :v);
SQL> EXECUTE :n := 3; :v := 'Kalgan';
SQL> INSERT INTO t (n, v) VALUES (:n, :v);

SQL> SELECT sql_id, child_number, executions
  2  FROM v$sql
  3  WHERE sql_text = 'INSERT INTO t (n, v) VALUES (:n, :v)';

SQL_ID         CHILD_NUMBER  EXECUTIONS
6cvmu7dwnvxwj             0           3

There are, however, situations where several child cursors are created even with bind variables. In the following continuation of the example, the INSERT statement is unchanged; only the maximum size of the VARCHAR2 variable has changed (from 32 to 33), and that is enough to create a new child cursor:

SQL> VARIABLE v VARCHAR2(33)
SQL> EXECUTE :n := 4; :v := 'Terminus';
SQL> INSERT INTO t (n, v) VALUES (:n, :v);

SQL_ID         CHILD_NUMBER  EXECUTIONS
6cvmu7dwnvxwj             0           3
6cvmu7dwnvxwj             1           1
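The same sharing benefit applies to dynamic SQL in PL/SQL when values are bound instead of concatenated into the statement text; a minimal sketch (my illustration, reusing the t table from the example):

BEGIN
  EXECUTE IMMEDIATE 'INSERT INTO t (n, v) VALUES (:n, :v)'
    USING 5, 'Anacreon';   -- bound values: one shared cursor, no SQL injection
END;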

18 SQL Processing
Agenda for this part: SQL Processing; Query Optimizer Concepts; Query Transformations.

19 Stages of SQL Processing
SQL processing is the parsing, optimization, row source generation, and execution of a SQL statement. Depending on the statement, the database may omit some of these stages.
SQL Parsing: the first stage of SQL processing. This stage involves separating the pieces of a SQL statement into a data structure that other routines can process. The database parses a statement when instructed by the application, which means that only the application, and not the database itself, can reduce the number of parses. When an application issues a SQL statement, it makes a parse call to the database to prepare the statement for execution. The parse call opens or creates a cursor, which is a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information. The cursor and private SQL area are in the program global area (PGA).
During the parse call, the database performs three checks: a syntax check, a semantic check, and a shared pool check.
Syntax check:

SQL> SELECT * FORM employees;
SELECT * FORM employees
         *
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

Semantic check:

SQL> SELECT * FROM nonexistent_table;
SELECT * FROM nonexistent_table
              *
ERROR at line 1:
ORA-00942: table or view does not exist

20 Shared Pool Check
SGA: shared memory. Components:
• Shared pool: caches the most recently used SQL statements that have been issued by database users.
• Database buffer cache: caches the data that has been most recently accessed by database users.
• Redo log buffer: stores transaction information for recovery purposes.
Program global area (PGA): not shared. Components:
• SQL work area: used for memory-intensive operations such as sorting or building a hash table during join operations.
• Private SQL area: holds information about the SQL statement and bind variable values.
The SGA is a shared memory area: all users of the database share the information maintained in it. Oracle allocates memory for the SGA when the instance is started and deallocates it when the instance is shut down. In addition to the user and server processes associated with each user connection, a PGA is created for each server process. The PGA stores user-specific session information such as bind variables and session variables; PGA memory is not shared, and each server process has its own exclusive PGA. The PGA can be configured to be managed automatically by setting the PGA_AGGREGATE_TARGET database parameter; Oracle then keeps the total amount of PGA memory allocated across all database server processes and background processes within this target.
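To check how automatic PGA management is configured and how much PGA is actually in use (a sketch using the standard v$pgastat view):

SQL> SHOW PARAMETER pga_aggregate_target
SQL> SELECT name, value
  2  FROM v$pgastat
  3  WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');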

21 Shared Pool Check
During the parse, the database performs a shared pool check to determine whether it can skip resource-intensive steps of statement processing. To this end, the database uses a hashing algorithm to generate a hash value for every SQL statement. The statement hash value is the SQL ID shown in V$SQL.SQL_ID. This hash value is deterministic within a version of Oracle Database, so the same statement in a single instance or in different instances has the same SQL ID.
When a user submits a SQL statement, the database searches the shared SQL area to see whether an existing parsed statement has the same hash value. The hash value of a SQL statement is distinct from the following values:
• Memory address for the statement: Oracle Database uses the SQL ID to perform a keyed read in a lookup table; in this way, the database obtains possible memory addresses of the statement.
• Hash value of an execution plan for the statement: a SQL statement can have multiple plans in the shared pool, and typically each plan has a different hash value. If the same SQL ID has multiple plan hash values, then the database knows that multiple plans exist for this SQL ID.
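A quick way to list statements that currently have more than one plan in the shared pool (a sketch over v$sql):

SQL> SELECT sql_id, COUNT(DISTINCT plan_hash_value) AS plan_count
  2  FROM v$sql
  3  GROUP BY sql_id
  4  HAVING COUNT(DISTINCT plan_hash_value) > 1;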

22 Shared Pool Check
Parse operations fall into the following categories, depending on the type of statement submitted and the result of the hash check:
1. Hard parse (library cache miss): if Oracle Database cannot reuse existing code, it must build a new executable version of the application code. Note: the database always performs a hard parse of DDL. During the hard parse, the database accesses the library cache and data dictionary cache numerous times to check the data dictionary; when it accesses these areas, it uses a serialization device called a latch on required objects so that their definition does not change. Latch contention increases statement execution time and decreases concurrency.
2. Soft parse (library cache hit): any parse that is not a hard parse. If the submitted statement is the same as a reusable SQL statement in the shared pool, Oracle Database reuses the existing code. Soft parses can vary in how much work they perform; for example, configuring the session shared SQL area can sometimes reduce the amount of latching in soft parses, making them "softer." In general, a soft parse is preferable to a hard parse because the database skips the optimization and row source generation steps, proceeding straight to execution.
(Figure 3-2 in the source: a simplified representation of a shared pool check of an UPDATE statement in a dedicated server architecture.)

23 Query Optimizer Concepts
Purpose of the Query Optimizer; Cost-Based Optimization; Execution Plans
The query optimizer (called simply the optimizer) is built-in database software that determines the most efficient method for a SQL statement to access requested data. The source chapter also covers optimizer components, the Automatic Tuning Optimizer, adaptive query optimization, and optimizer management of SQL plan baselines.

24 Purpose of the Query Optimizer
The optimizer attempts to generate the best execution plan for a SQL statement. The best execution plan is defined as the plan with the lowest cost among all considered candidate plans. The cost computation accounts for factors of query execution such as I/O, CPU, and communication. The best method of execution depends on myriad conditions, including how the query is written, the size of the data set, the layout of the data, and which access structures exist. The optimizer determines the best plan for a SQL statement by examining multiple access methods (such as full table scan or index scans) and different join methods (such as nested loops and hash joins). Because the database has many internal statistics and tools at its disposal, the optimizer is usually in a better position than the user to determine the best method of statement execution. For this reason, all SQL statements use the optimizer.
Consider a user who queries records for employees who are managers. If the database statistics indicate that 80% of employees are managers, then the optimizer may decide that a full table scan is most efficient. However, if statistics indicate that few employees are managers, then reading an index followed by a table access by rowid may be more efficient than a full table scan.
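To see which plan the optimizer actually chose for such a query (a sketch; the employees table and a job column are assumed for illustration):

SQL> EXPLAIN PLAN FOR
  2  SELECT * FROM employees WHERE job = 'MANAGER';

SQL> SELECT * FROM TABLE(dbms_xplan.display);

The output shows the chosen access path (for example, TABLE ACCESS FULL versus INDEX RANGE SCAN plus TABLE ACCESS BY INDEX ROWID) together with the optimizer's cost and cardinality estimates.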

25 Cost-Based Optimization
The overall process of choosing the most efficient means of executing a SQL statement. SQL is a nonprocedural language, so the optimizer is free to merge, reorganize, and process in any order. Based on statistics collected about the accessed data. Factors considered by the optimizer include: System resources, which include I/O, CPU, and memory Number of rows returned Size of the initial data sets Cost-Based Optimization Query optimization is the overall process of choosing the most efficient means of executing a SQL statement. SQL is a nonprocedural language, so the optimizer is free to merge, reorganize, and process in any order. The database optimizes each SQL statement based on statistics collected about the accessed data. When generating execution plans, the optimizer considers different access paths and join methods. Factors considered by the optimizer include: • System resources, which include I/O, CPU, and memory • Number of rows returned • Size of the initial data sets The cost is a number that represents the estimated resource usage for an execution plan. The optimizer’s cost model accounts for the I/O, CPU, and network resources that the database requires to execute the query. The optimizer assigns a cost to each possible plan, and then chooses the plan with the lowest cost. For this reason, the optimizer is sometimes called the cost-based optimizer (CBO) to contrast it with the legacy rule-based optimizer (RBO). Note: The optimizer may not make the same decisions from one version of Oracle Database to the next. In recent versions, the optimizer might make different decisions because better information is available and more optimizer transformations are possible. Oracle Database 11g: New Features for Administrators

26 Cost-Based Optimization
The cost is a number that represents the estimated resource usage for an execution plan. The optimizer’s cost model accounts for the I/O, CPU, and network resources that the database requires to execute the query. The optimizer assigns a cost to each possible plan, and then chooses the plan with the lowest cost. For this reason, the optimizer is sometimes called the cost-based optimizer (CBO) to contrast it with the legacy rule-based optimizer (RBO). Cost-Based Optimization Query optimization is the overall process of choosing the most efficient means of executing a SQL statement. SQL is a nonprocedural language, so the optimizer is free to merge, reorganize, and process in any order. The database optimizes each SQL statement based on statistics collected about the accessed data. When generating execution plans, the optimizer considers different access paths and join methods. Factors considered by the optimizer include: • System resources, which include I/O, CPU, and memory • Number of rows returned • Size of the initial data sets The cost is a number that represents the estimated resource usage for an execution plan. The optimizer’s cost model accounts for the I/O, CPU, and network resources that the database requires to execute the query. The optimizer assigns a cost to each possible plan, and then chooses the plan with the lowest cost. For this reason, the optimizer is sometimes called the cost-based optimizer (CBO) to contrast it with the legacy rule-based optimizer (RBO). Note: The optimizer may not make the same decisions from one version of Oracle Database to the next. In recent versions, the optimizer might make different decisions because better information is available and more optimizer transformations are possible. Oracle Database 11g: New Features for Administrators

27 Oracle Database 11g: New Features for Administrators 16 - 27
Execution Plans An execution plan describes a recommended method of execution for a SQL statement. The plan shows the combination of steps Oracle Database uses to execute a SQL statement. Each step either retrieves rows of data physically from the database or prepares them for the user issuing the statement. An execution plan displays the cost of the entire plan, indicated on line 0, and each separate operation. The cost is an internal unit that the execution plan only displays to allow for plan comparisons. Thus, you cannot tune or change the cost value. Execution Plans An execution plan describes a recommended method of execution for a SQL statement. The plan shows the combination of steps Oracle Database uses to execute a SQL statement. Each step either retrieves rows of data physically from the database or prepares them for the user issuing the statement. An execution plan displays the cost of the entire plan, indicated on line 0, and each separate operation. The cost is an internal unit that the execution plan only displays to allow for plan comparisons. Thus, you cannot tune or change the cost value. In Figure 4-1, the optimizer generates two possible execution plans for an input SQL statement, uses statistics to estimate their costs, compares their costs, and then chooses the plan with the lowest cost. Oracle Database 11g: New Features for Administrators
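A recently executed statement's actual plan, including the cost shown on line 0, can also be pulled from the cursor cache (a sketch; DBMS_XPLAN.DISPLAY_CURSOR with no arguments shows the last statement executed in the session):
SQL> SET SERVEROUTPUT OFF
SQL> SELECT COUNT(*) FROM hr.employees;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);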

28 Oracle Database 11g: New Features for Administrators 16 - 28
Query Blocks Each SELECT block in the original SQL statement is represented internally by a query block. A query block can be a top-level statement, subquery, or unmerged view. Example: the following SQL statement consists of two query blocks: SELECT first_name, last_name FROM hr.employees WHERE department_id IN (SELECT department_id FROM hr.departments WHERE location_id = 1800); Query Blocks As shown in Figure 4-1, the input to the optimizer is a parsed representation of a SQL statement. Each SELECT block in the original SQL statement is represented internally by a query block. A query block can be a top-level statement, subquery, or unmerged view (see “View Merging”). Example 4-1 Query Blocks The following SQL statement consists of two query blocks. The subquery in parentheses is the inner query block. The outer query block, which is the rest of the SQL statement, retrieves names of employees in the departments whose IDs were supplied by the subquery. The query form determines how query blocks are interrelated. SELECT first_name, last_name FROM hr.employees WHERE department_id IN (SELECT department_id FROM hr.departments WHERE location_id = 1800); Oracle Database 11g: New Features for Administrators

29 Oracle Database 11g: New Features for Administrators 16 - 29
Query Subplans For each query block, the optimizer generates a query subplan. The database optimizes query blocks separately from the bottom up. Thus, the database optimizes the innermost query block first and generates a subplan for it, and then generates the outer query block representing the entire query. The number of possible plans for a query block is proportional to the number of objects in the FROM clause. This number rises exponentially with the number of objects. For example, the possible plans for a join of five tables are significantly higher than the possible plans for a join of two tables. Query Subplans For each query block, the optimizer generates a query subplan. The database optimizes query blocks separately from the bottom up. Thus, the database optimizes the innermost query block first and generates a subplan for it, and then generates the outer query block representing the entire query. The number of possible plans for a query block is proportional to the number of objects in the FROM clause. This number rises exponentially with the number of objects. For example, the possible plans for a join of five tables are significantly higher than the possible plans for a join of two tables. Oracle Database 11g: New Features for Administrators
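To put rough numbers on that growth (a back-of-envelope illustration that counts join orders only, ignoring access paths and join methods): n tables in the FROM clause allow n! join orders, so a two-table join has 2! = 2 orders while a five-table join already has 5! = 120.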

30 Oracle Database 11g: New Features for Administrators 16 - 30
Optimizer Components About Optimizer Components The optimizer contains three main components, which are shown in Figure 4-2. A set of query blocks represents a parsed query, which is the input to the optimizer. The optimizer performs the following operations: 1. Query transformer The optimizer determines whether it is helpful to change the form of the query so that the optimizer can generate a better execution plan. See “Query Transformer”. 2. Estimator The optimizer estimates the cost of each plan based on statistics in the data dictionary. See “Estimator”. 3. Plan generator The optimizer compares the costs of plans and chooses the lowest-cost plan, known as the execution plan, to pass to the row source generator. See “Plan Generator”. Oracle Database 11g: New Features for Administrators

31 Oracle Database 11g: New Features for Administrators 16 - 31
Query Transformer Query Transformer For some statements, the query transformer determines whether it is advantageous to rewrite the original SQL statement into a semantically equivalent SQL statement with a lower cost. When a viable alternative exists, the database calculates the cost of the alternatives separately and chooses the lowest-cost alternative. Query Transformations describes the different types of optimizer transformations. Figure 4-3 shows the query transformer rewriting an input query that uses OR into an output query that uses UNION ALL. Oracle Database 11g: New Features for Administrators
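As a sketch of the OR-to-UNION ALL rewrite in Figure 4-3 (the table and column names are illustrative; LNNVL is the Oracle function used to avoid returning duplicate rows):
-- Input query:
SELECT * FROM sales WHERE promo_id = 33 OR prod_id = 136;
-- A semantically equivalent form the transformer may produce:
SELECT * FROM sales WHERE prod_id = 136
UNION ALL
SELECT * FROM sales WHERE promo_id = 33 AND LNNVL(prod_id = 136);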

32 Oracle Database 11g: New Features for Administrators 16 - 32
Selectivity The selectivity represents a fraction of rows from a row set. The row set can be a base table, a view, or the result of a join. The selectivity is tied to a query predicate, such as last_name = 'Smith', or a combination of predicates, such as last_name = 'Smith' AND job_id = 'SH_CLERK'. Note: Selectivity is an internal calculation that is not visible in execution plans. Selectivity The selectivity represents a fraction of rows from a row set. The row set can be a base table, a view, or the result of a join. The selectivity is tied to a query predicate, such as last_name = 'Smith', or a combination of predicates, such as last_name = 'Smith' AND job_id = 'SH_CLERK'. Note: Selectivity is an internal calculation that is not visible in execution plans. A predicate filters a specific number of rows from a row set. Thus, the selectivity of a predicate indicates how many rows pass the predicate test. Selectivity ranges from 0.0 to 1.0. A selectivity of 0.0 means that no rows are selected from a row set, whereas a selectivity of 1.0 means that all rows are selected. A predicate becomes more selective as the value approaches 0.0 and less selective (or more unselective) as the value approaches 1.0. The optimizer estimates selectivity depending on whether statistics are available: • Statistics not available Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, the optimizer either uses dynamic statistics or an internal default value. The database uses different internal defaults depending on the predicate type. For example, the internal default for an equality predicate (last_name = 'Smith') is lower than for a range predicate (last_name > 'Smith') because an equality predicate is expected to return a smaller fraction of rows. • Statistics available When statistics are available, the estimator uses them to estimate selectivity. Assume there are 150 distinct employee last names. For an equality predicate last_name = 'Smith', selectivity is the reciprocal of the number n of distinct values of last_name, which in this example is .006 because the query selects rows that contain 1 out of 150 distinct values. If a histogram exists on the last_name column, then the estimator uses the histogram instead of the number of distinct values. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates, especially for columns that have data skew. See Histograms. Oracle Database 11g: New Features for Administrators

33 Selectivity and Cardinality
cardinality = selectivity × num_rows
SQL> SELECT * FROM t;
10000 rows selected.
The cardinality of the operation accessing the table in the next query is 2,601, so its selectivity is 0.26 (2,601 rows returned out of 10,000):
SQL> SELECT * FROM t WHERE n1 BETWEEN 6000 AND 7000;
2601 rows selected.
Cardinality = 0 => selectivity = 0:
SQL> SELECT * FROM t WHERE n1 = 19;
no rows selected
Oracle Database 11g: New Features for Administrators

34 Selectivity and Cardinality
When a query contains a GROUP BY clause or aggregate functions, the execution plan contains at least one aggregate operation:
SQL> SELECT sum(n2) FROM t WHERE n1 BETWEEN 6000 AND 7000;
SUM(N2)
70846
Execute the following query to find out how many rows are returned by the access operation and passed as input to the aggregate operation; the selectivity is 0.26 (2,601/10,000):
SQL> SELECT * FROM t WHERE n1 BETWEEN 6000 AND 7000;
2601 rows selected.
In the previous three examples, the selectivity related to the operation accessing the table is computed by dividing the cardinality of the query by the number of rows stored in the table. This is possible because the three queries don’t contain joins or operations leading to aggregations. As soon as a query contains a GROUP BY clause or aggregate functions in the SELECT clause, the execution plan contains at least one aggregate operation. The following query illustrates this (note the presence of the sum aggregate function):
SQL> SELECT sum(n2) FROM t WHERE n1 BETWEEN 6000 AND 7000;
SUM(N2)
70846
1 row selected.
In this type of situation, it’s not possible to compute the selectivity of the access operation based on the cardinality of the query (in this case, 1). Instead, a query like the following should be executed to find out how many rows are returned by the access operation and passed as input to the aggregate operation. Here, the cardinality of the access operation accessing the table is 2,601, and therefore the selectivity is 0.26 (2,601/10,000):
SQL> SELECT count(*) FROM t WHERE n1 BETWEEN 6000 AND 7000;
COUNT(*)
2601
As you’ll see later, especially in Chapter 13, knowing the selectivity of an operation helps you determine what the most efficient access path is. Oracle Database 11g: New Features for Administrators

35 Oracle Database 11g: New Features for Administrators 16 - 35
Selectivity Selectivity ranges from 0.0 to 1.0. A selectivity of 0.0 means that no rows are selected from the row set, whereas a selectivity of 1.0 means that all rows are selected. A predicate becomes more selective as the value approaches 0.0 and less selective (or more unselective) as the value approaches 1.0. Selectivity The selectivity represents a fraction of rows from a row set. The row set can be a base table, a view, or the result of a join. The selectivity is tied to a query predicate, such as last_name = 'Smith', or a combination of predicates, such as last_name = 'Smith' AND job_id = 'SH_CLERK'. Note: Selectivity is an internal calculation that is not visible in execution plans. A predicate filters a specific number of rows from a row set. Thus, the selectivity of a predicate indicates how many rows pass the predicate test. Selectivity ranges from 0.0 to 1.0. A selectivity of 0.0 means that no rows are selected from a row set, whereas a selectivity of 1.0 means that all rows are selected. A predicate becomes more selective as the value approaches 0.0 and less selective (or more unselective) as the value approaches 1.0. The optimizer estimates selectivity depending on whether statistics are available: • Statistics not available Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, the optimizer either uses dynamic statistics or an internal default value. The database uses different internal defaults depending on the predicate type. For example, the internal default for an equality predicate (last_name = 'Smith') is lower than for a range predicate (last_name > 'Smith') because an equality predicate is expected to return a smaller fraction of rows. • Statistics available When statistics are available, the estimator uses them to estimate selectivity. Assume there are 150 distinct employee last names. For an equality predicate last_name = 'Smith', selectivity is the reciprocal of the number n of distinct values of last_name, which in this example is .006 because the query selects rows that contain 1 out of 150 distinct values. If a histogram exists on the last_name column, then the estimator uses the histogram instead of the number of distinct values. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates, especially for columns that have data skew. See Histograms. Oracle Database 11g: New Features for Administrators

36 Oracle Database 11g: New Features for Administrators 16 - 36
Selectivity Optimizer estimates selectivity depending on whether statistics are available: Statistics not available Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, the optimizer either uses dynamic statistics or an internal default value. The database uses different internal defaults depending on the predicate type. For example, the internal default for an equality predicate (last_name = 'Smith') is lower than for a range predicate (last_name > 'Smith') because an equality predicate is expected to return a smaller fraction of rows. Selectivity The selectivity represents a fraction of rows from a row set. The row set can be a base table, a view, or the result of a join. The selectivity is tied to a query predicate, such as last_name = 'Smith', or a combination of predicates, such as last_name = 'Smith' AND job_id = 'SH_CLERK'. Note: Selectivity is an internal calculation that is not visible in execution plans. A predicate filters a specific number of rows from a row set. Thus, the selectivity of a predicate indicates how many rows pass the predicate test. Selectivity ranges from 0.0 to 1.0. A selectivity of 0.0 means that no rows are selected from a row set, whereas a selectivity of 1.0 means that all rows are selected. A predicate becomes more selective as the value approaches 0.0 and less selective (or more unselective) as the value approaches 1.0. The optimizer estimates selectivity depending on whether statistics are available: • Statistics not available Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, the optimizer either uses dynamic statistics or an internal default value. The database uses different internal defaults depending on the predicate type. For example, the internal default for an equality predicate (last_name = 'Smith') is lower than for a range predicate (last_name > 'Smith') because an equality predicate is expected to return a smaller fraction of rows. • Statistics available When statistics are available, the estimator uses them to estimate selectivity. Assume there are 150 distinct employee last names. For an equality predicate last_name = 'Smith', selectivity is the reciprocal of the number n of distinct values of last_name, which in this example is .006 because the query selects rows that contain 1 out of 150 distinct values. If a histogram exists on the last_name column, then the estimator uses the histogram instead of the number of distinct values. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates, especially for columns that have data skew. See Histograms. Oracle Database 11g: New Features for Administrators
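Continuing the example from the notes (a sketch; the 150 distinct last names are the assumed statistic): the equality selectivity is 1/num_distinct = 1/150 ≈ 0.0067. The inputs the estimator uses can be inspected in the dictionary:
SQL> SELECT num_distinct, density, histogram
  2  FROM user_tab_col_statistics
  3  WHERE table_name = 'EMPLOYEES' AND column_name = 'LAST_NAME';
-- With no histogram, density is typically 1/num_distinct, which is the
-- selectivity used for an equality predicate.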

37 Oracle Database 11g: New Features for Administrators 16 - 37
Cardinality The number of rows returned by each operation in an execution plan. The optimizer determines the cardinality for each operation based on a complex set of formulas that use both table and column level statistics, or dynamic statistics, as input. With no histogram, the optimizer assumes a uniform distribution. Example: SELECT first_name, last_name FROM employees WHERE salary='10200'; The employees table contains 107 rows. Current database statistics indicate that the number of distinct values in the salary column is 58. The optimizer therefore calculates the cardinality of the result set as 107/58 = 1.84, rounded up to 2. Cardinality The cardinality is the number of rows returned by each operation in an execution plan. For example, if the optimizer estimate for the number of rows returned by a full table scan is 100, then the cardinality estimate for this operation is 100. The cardinality estimate appears in the Rows column of the execution plan. The optimizer determines the cardinality for each operation based on a complex set of formulas that use both table and column level statistics, or dynamic statistics, as input. The optimizer uses one of the simplest formulas when a single equality predicate appears in a single-table query, with no histogram. In this case, the optimizer assumes a uniform distribution and calculates the cardinality for the query by dividing the total number of rows in the table by the number of distinct values in the column used in the WHERE clause predicate. For example, user hr queries the employees table as follows: SELECT first_name, last_name FROM employees WHERE salary='10200'; The employees table contains 107 rows. The current database statistics indicate that the number of distinct values in the salary column is 58. Thus, the optimizer calculates the cardinality of the result set as 2, using the formula 107/58=1.84. Cardinality estimates must be as accurate as possible because they influence all aspects of the execution plan. Cardinality is important when the optimizer determines the cost of a join. For example, in a nested loops join of the employees and departments tables, the number of rows in employees determines how often the database must probe the departments table. Cardinality is also important for determining the cost of sorts. Oracle Database 11g: New Features for Administrators
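A quick way to sanity-check such an estimate (a sketch applying the same uniform-distribution formula):
SQL> SELECT COUNT(*) total_rows, COUNT(DISTINCT salary) distinct_salaries
  2  FROM hr.employees;
-- cardinality estimate for an equality predicate on salary (no histogram)
--   = total_rows / distinct_salaries = 107 / 58 = 1.84, rounded up to 2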

38 Cardinality Importance
Cardinality estimates must be as accurate as possible because they influence all aspects of the execution plan. Cardinality is important when the optimizer determines the cost of a join. For example, in a nested loops join of the employees and departments tables, the number of rows in employees determines how often the database must probe the departments table. Cardinality is also important for determining the cost of sorts. Cardinality The cardinality is the number of rows returned by each operation in an execution plan. For example, if the optimizer estimate for the number of rows returned by a full table scan is 100, then the cardinality estimate for this operation is 100. The cardinality estimate appears in the Rows column of the execution plan. The optimizer determines the cardinality for each operation based on a complex set of formulas that use both table and column level statistics, or dynamic statistics, as input. The optimizer uses one of the simplest formulas when a single equality predicate appears in a single-table query, with no histogram. In this case, the optimizer assumes a uniform distribution and calculates the cardinality for the query by dividing the total number of rows in the table by the number of distinct values in the column used in the WHERE clause predicate. For example, user hr queries the employees table as follows: SELECT first_name, last_name FROM employees WHERE salary='10200'; The employees table contains 107 rows. The current database statistics indicate that the number of distinct values in the salary column is 58. Thus, the optimizer calculates the cardinality of the result set as 2, using the formula 107/58=1.84. Cardinality estimates must be as accurate as possible because they influence all aspects of the execution plan. Cardinality is important when the optimizer determines the cost of a join. For example, in a nested loops join of the employees and departments tables, the number of rows in employees determines how often the database must probe the departments table. Cardinality is also important for determining the cost of sorts. Oracle Database 11g: New Features for Administrators

39 Oracle Database 11g: New Features for Administrators 16 - 39
Cost The optimizer cost model accounts for the I/O, CPU, and network resources that a query is predicted to use. An internal numeric measure that represents the estimated resource usage for a plan. The lower the cost, the more efficient the plan. You cannot tune or change it. Cost The optimizer cost model accounts for the I/O, CPU, and network resources that a query is predicted to use. The cost is an internal numeric measure that represents the estimated resource usage for a plan. The lower the cost, the more efficient the plan. The execution plan displays the cost of the entire plan, which is indicated on line 0, and each individual operation. For example, the following plan shows a cost of 14.
EXPLAINED SQL STATEMENT:
SELECT prod_category, AVG(amount_sold) FROM sales s, products p
WHERE p.prod_id = s.prod_id GROUP BY prod_category
Plan hash value:
| Id | Operation             | Name                 | Cost (%CPU)|
|  0 | SELECT STATEMENT      |                      |    14 (100)|
|  1 |  HASH GROUP BY        |                      |    14  (22)|
|  2 |   HASH JOIN           |                      |    13  (16)|
|  3 |    VIEW               | index$_join$_002     |     7  (15)|
|  4 |     HASH JOIN         |                      |            |
|  5 |      INDEX FAST FULL SCAN| PRODUCTS_PK       |     4   (0)|
|  6 |      INDEX FAST FULL SCAN| PRODUCTS_PROD_CAT_IX |  4   (0)|
|  7 |    PARTITION RANGE ALL|                      |     5   (0)|
|  8 |     TABLE ACCESS FULL | SALES                |     5   (0)|
The cost is an internal unit that you can use for plan comparisons. You cannot tune or change it. Oracle Database 11g: New Features for Administrators

40 Oracle Database 11g: New Features for Administrators 16 - 40
Cost of Access Path Table scan or fast full index scan: the database reads multiple blocks from disk in a single I/O. The cost of the scan depends on the number of blocks to be scanned and the multiblock read count value. Index scan: the cost of an index scan depends on the levels in the B-tree, the number of index leaf blocks to be scanned, and the number of rows to be fetched using the rowid in the index keys. The cost of fetching rows using rowids depends on the index clustering factor. Cost The optimizer cost model accounts for the I/O, CPU, and network resources that a query is predicted to use. The cost is an internal numeric measure that represents the estimated resource usage for a plan. The lower the cost, the more efficient the plan. The execution plan displays the cost of the entire plan, which is indicated on line 0, and each individual operation. For example, the following plan shows a cost of 14.
EXPLAINED SQL STATEMENT:
SELECT prod_category, AVG(amount_sold) FROM sales s, products p
WHERE p.prod_id = s.prod_id GROUP BY prod_category
Plan hash value:
| Id | Operation             | Name                 | Cost (%CPU)|
|  0 | SELECT STATEMENT      |                      |    14 (100)|
|  1 |  HASH GROUP BY        |                      |    14  (22)|
|  2 |   HASH JOIN           |                      |    13  (16)|
|  3 |    VIEW               | index$_join$_002     |     7  (15)|
|  4 |     HASH JOIN         |                      |            |
|  5 |      INDEX FAST FULL SCAN| PRODUCTS_PK       |     4   (0)|
|  6 |      INDEX FAST FULL SCAN| PRODUCTS_PROD_CAT_IX |  4   (0)|
|  7 |    PARTITION RANGE ALL|                      |     5   (0)|
|  8 |     TABLE ACCESS FULL | SALES                |     5   (0)|
The cost is an internal unit that you can use for plan comparisons. You cannot tune or change it. Oracle Database 11g: New Features for Administrators
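The inputs to these cost estimates are visible in the data dictionary and the instance configuration (a sketch; BLEVEL, LEAF_BLOCKS, and CLUSTERING_FACTOR are real USER_INDEXES columns, while the table name is illustrative):
SQL> SELECT index_name, blevel, leaf_blocks, clustering_factor
  2  FROM user_indexes
  3  WHERE table_name = 'EMP';
SQL> SHOW PARAMETER db_file_multiblock_read_count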

41 Oracle Database 11g: New Features for Administrators 16 - 41
Clustering Factor For a B-tree index, the index clustering factor measures the physical grouping of rows in relation to an index value, such as last name. The index clustering factor helps the optimizer decide whether an index scan or a full table scan is more efficient for certain queries. A low clustering factor indicates an efficient index scan. When a table is accessed through an index, the estimated cost of that access is determined by index statistics, not table statistics; knowing the selectivity alone isn’t sufficient for the CBO to estimate the cost. Consider an imaginary table with ten blocks that each contain 20 rows, making a total of 200 rows in the table. Assume that 20 rows in the table have a specific value for a specific column and that column is indexed. If these 20 rows are scattered around the table, there would be approximately two matching rows per block, and every block in the table would need to be read to obtain these 20 rows; a full table scan would then be more efficient than an indexed access, primarily because multi-block reads could be used to access the table rather than single-block reads. If instead all 20 rows that we select through the index appear in a single block, an indexed access would be far more efficient, as only one table block needs to be read. In both cases the selectivity is 10% (20 rows from 200), so knowing the selectivity isn’t sufficient for the CBO to estimate the cost of access to a table via an index. Enter the clustering factor.
Table blocks from a weakly clustered index
How Index Statistics are Used to Cost Table Access When a table is accessed through an index the estimated cost of that access is determined by index—not table—statistics. On the other hand, index statistics are used to determine neither the number of rows nor the number of bytes returned by the table access operation. That makes sense because the number of rows selected from a table is independent of which index, if any, is used to access the table. Figures 9-1 and 9-2 show how difficult it is to determine the cost of accessing a table through an index. Figure 9-1 shows three table blocks from an imaginary table with ten blocks that each contain 20 rows, making a total of 200 rows in the table. Let us assume that 20 rows in the table have a specific value for a specific column and that column is indexed. If these 20 rows are scattered around the table then there would be approximately two matching rows per block. I have represented this situation by the highlighted rows in Figure 9-1. As you can see, every block in the table would need to be read to obtain these 20 rows, so a full table scan would be more efficient than an indexed access, primarily because multi-block reads could be used to access the table, rather than single block reads. Now take a look at Figure 9-2. In this case all 20 rows that we select through the index appear in a single block, and now an indexed access would be far more efficient, as only one table block needs to be read. Notice that in both Figure 9-1 and Figure 9-2 selectivity is 10% (20 rows from 200) so knowing the selectivity isn’t sufficient for the CBO to estimate the cost of access to a table via an index. Enter the clustering factor. To calculate the cost of accessing a table through an index (as opposed to the cost of accessing the index itself) we multiply the clustering factor by the selectivity.
Since strongly clustered indexes have a lower clustering factor than weakly clustered indexes, the cost of accessing the table from a strongly clustered index is lower than that for accessing the table from a weakly clustered index. Listing 9-3 demonstrates how this works in practice. For a B-tree index, the index clustering factor measures the physical grouping of rows in relation to an index value, such as last name. The index clustering factor helps the optimizer decide whether an index scan or a full table scan is more efficient for certain queries. A low clustering factor indicates an efficient index scan.
Table blocks from a strongly clustered index
Oracle Database 11g: New Features for Administrators
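Putting rough numbers on the two figures (an illustration of the rule from the notes, estimated table-access cost ≈ clustering factor × selectivity; the clustering factor values are assumed, not measured): in the weakly clustered case the clustering factor approaches the row count, about 200 here, so 200 × 0.10 ≈ 20 block visits; in the strongly clustered case it approaches the block count, about 10, so 10 × 0.10 ≈ 1 block visit, which is why the index pays off only in the second case.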

42 Oracle Database 11g: New Features for Administrators 16 - 42
Estimator The estimator uses three different measures to determine cost: Selectivity: a predicate becomes more selective as the selectivity value approaches 0 and less selective (or more unselective) as the value approaches 1. Cardinality: the number of rows returned by each operation in an execution plan; the Rows column in an execution plan shows the estimated cardinality. Cost: this measure represents units of work or resource used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work. Estimator The estimator is the component of the optimizer that determines the overall cost of a given execution plan. The estimator uses three different measures to determine cost: • Selectivity The percentage of rows in the row set that the query selects, with 0 meaning no rows and 1 meaning all rows. Selectivity is tied to a query predicate, such as WHERE last_name LIKE 'A%', or a combination of predicates. A predicate becomes more selective as the selectivity value approaches 0 and less selective (or more unselective) as the value approaches 1. Note: Selectivity is an internal calculation that is not visible in the execution plans. • Cardinality The cardinality is the number of rows returned by each operation in an execution plan. This input, which is crucial to obtaining an optimal plan, is common to all cost functions. The estimator can derive cardinality from the table statistics collected by DBMS_STATS, or derive it after accounting for effects from predicates (filter, join, and so on), DISTINCT or GROUP BY operations, and so on. The Rows column in an execution plan shows the estimated cardinality. • Cost This measure represents units of work or resource used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work. As shown in Figure 4-4, if statistics are available, then the estimator uses them to compute the measures. The statistics improve the degree of accuracy of the measures. Oracle Database 11g: New Features for Administrators

43 Oracle Database 11g: New Features for Administrators 16 - 43
Plan Generator The plan generator explores various plans for a query block by trying out different access paths, join methods, and join orders. Many plans are possible because of the various combinations that the database can use to produce the same result. The optimizer picks the plan with the lowest cost. The plan generator explores various plans for a query block by trying out different access paths, join methods, and join orders. Many plans are possible because of the various combinations that the database can use to produce the same result. The optimizer picks the plan with the lowest cost. Figure 4-5 shows the optimizer testing different plans for an input query. Oracle Database 11g: New Features for Administrators

44 Displaying Execution Plans
ID: a number to uniquely identify the operations. Operation: there are some 200 operation codes. Name: the database object that is being operated on. Rows: the estimated number of rows that the operation is finally going to produce (a cardinality estimate); it applies to a single invocation of that operation, not to the total number of rows returned by all invocations. In this case, operations 1 and 2 correspond to the subquery in the select list and are expected to return an average of about one row each time they are invoked. Bytes: an estimate of the number of bytes returned by one invocation of the operation. Cost and Time: both display the estimated elapsed time for the statement; the cost column expresses this elapsed time in terms of single-block read units. Displaying the Results of EXPLAIN PLAN The raison d’être for the EXPLAIN PLAN SQL statement is to generate an execution plan for a statement without actually executing the statement. The EXPLAIN PLAN statement only writes a representation of the execution plan to a table. It doesn’t display anything itself. For that, you have to use the DBMS_XPLAN.DISPLAY function. This function takes several arguments, all of which are optional; I will use the defaults for now. The intention is to pass the results of the function to the TABLE operator. Listing 3-1 shows an example that performs an EXPLAIN PLAN for a query using tables from the SCOTT example schema and then displays the results using DBMS_XPLAN.DISPLAY. Listing 3-1. Join of EMP and DEPT tables
SET LINES 200 PAGESIZE 0 FEEDBACK OFF
EXPLAIN PLAN FOR
SELECT e.*
      ,d.dname
      ,d.loc
      ,(SELECT COUNT (*) FROM scott.emp i WHERE i.deptno = e.deptno) dept_count
FROM scott.emp e, scott.dept d
WHERE e.deptno = d.deptno;
SELECT * FROM TABLE (DBMS_XPLAN.display);
Plan hash value:
| Id | Operation          | Name | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT   |      |   14 |  1638 |     7  (15)| 00:00:01 |
|  1 |  SORT AGGREGATE    |      |    1 |    13 |            |          |
|* 2 |   TABLE ACCESS FULL| EMP  |    1 |    13 |     3   (0)| 00:00:01 |
|* 3 |  HASH JOIN         |      |   14 |  1638 |     7  (15)| 00:00:01 |
|  4 |   TABLE ACCESS FULL| DEPT |    4 |   120 |     3   (0)| 00:00:01 |
|  5 |   TABLE ACCESS FULL| EMP  |   14 |  1218 |     3   (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("I"."DEPTNO"=:B1)
3 - access("E"."DEPTNO"="D"."DEPTNO")
Note
-----
dynamic sampling used for this statement (level=2)
The statement being explained shows the details of each employee together with details of the department, including the estimated count of employees in the department obtained from a correlated subquery in the select list. The output of DBMS_XPLAN.DISPLAY begins with a plan hash value. If two plans have the same hash value they will be the same plan. In fact, the same execution plan will have the same hash value on different databases, provided that the operations, object names, and predicates are all identical. After the plan hash value you can see the operation table, which provides details of how the runtime engine is expected to set about executing the SQL statement. • The first column is the ID. This is just a number to uniquely identify the operations in the statement for use by other parts of the plan. You will notice that there is an asterisk in front of operations with IDs 2 and 3. This is because they are referred to later in the output. For the sake of brevity I will say “operation 2” rather than “The operation with ID 2” from now on. • The second column is the operation.
There are some 200 operation codes, very few of which are officially documented, and the list grows longer with each release. Fortunately, the vast majority of operations in the vast majority of execution plans come from a small subset of a couple of dozen that will become familiar to you as you read this book, if they aren’t already. • The third column in the operation table is called name. This rather non-descript column heading refers to the name of the database object that is being operated on, so to speak. This column isn’t applicable to all operations and so is often blank, but in the case of a TABLE ACCESS FULL operation it is the name of the table being scanned. • The next column is rows. This column provides the estimated number of rows that the operation is finally going to produce, a figure often referred to as a cardinality estimate. So, for example, you can see that operation 2 uses the TABLE ACCESS FULL operation on EMP and will read the same number of rows as ID 5 that performs the same operation on the same object. However, ID 5 returns all 14 rows in the table while operation 2 is expected to return one row. You will see the reason for this discrepancy shortly. Sometimes individual operations are executed more than once in the course of a single execution of a SQL statement. It is important to understand that the estimated row count usually applies to a single invocation of that operation and not the total number of rows returned by all invocations. In this case, operations 1 and 2 correspond to the subquery in the select list and are expected to return an average of about one row each time they are invoked. • The bytes column shows an estimate of the number of bytes returned by one invocation of the operation. In other words, it is the average number of rows multiplied by the average size of a row. • The cost column and the time column both display the estimated elapsed time for the statement. The cost column expresses this elapsed time in terms of single-block read units, and the time column expresses it in more familiar terms. When larger values are displayed it is possible to work out from these figures how long the CBO believes a single-block read will take by dividing time by cost. Here the small numbers and rounding make that difficult. You can also see that the cost column shows an estimate of the percentage of time that the operation will spend on the CPU as opposed to reading data from disk. Join of EMP and DEPT tables Oracle Database 11g: New Features for Administrators

45 How Operations Interact
Operation 0: SELECT STATEMENT: This particular operation always has at least one child and calls the last child first. Operation 1: SORT AGGREGATE Operation 2: TABLE ACCESS FULL Operation 3: HASH JOIN Operations 4 and 5: TABLE FULL SCANs Parent-child relationships in an execution plan How Operations Interact In common with many people, the first question I asked when I was shown my first execution plan was “Where do I start?” A second question was implied: “Where would the runtime engine start?” Many people answer these questions by saying, “the topmost operation amongst the furthest indented,” or something like that. In Oracle 8i this was an oversimplification. After release 9i came out with the hash-join-input-swapping feature that I have already mentioned en passant, that sort of explanation became positively misleading. A correct, if somewhat glib, answer is that you start with the operation that has an ID of 0, i.e., you start at the top! To help explain where to go from there, you actually do need to understand all this indenting stuff. The operations are organized into a tree structure, and operation 0 is the root of the tree. The indentation indicates the parent-child relationships in an intuitive way. Figure 3-1 shows how you might draw the execution plan shown in Listing 3-1 to emphasize the parent-child relationships. Figure 3-1. Parent-child relationships in an execution plan Operation 0 makes one or more coroutine calls to operations 1 and 3 to generate rows, operation 1 makes coroutine calls to operation 2, and operation 3 makes coroutine calls to operations 4 and 5. The idea is that a child operation collects a small number of rows and then passes them back to its parent. If the child operation isn’t complete it waits to be called again to continue. THE CONCEPT OF COROUTINES You may be comfortable with the concept of subroutine calls but be less comfortable with the concept of coroutines. If this is the case then it may help to do some background reading on coroutines, as this may help you understand execution plans better. Let me go through the specific operations in Listing 3-1 the way the runtime engine would. Operation 0: SELECT STATEMENT The runtime engine begins with the SELECT STATEMENT itself. This particular operation always has at least one child and calls the last child first. That is, it calls the child with the highest ID—in this case operation 3. It waits for the HASH JOIN to return rows, and, for each row returned, it calls operation 1 to evaluate the correlated subquery in the select list. Operation 1 adds the extra column, and the now complete row is output. If the HASH JOIN isn’t finished it is called again to retrieve more rows. Operation 1: SORT AGGREGATE Operations 1 and 2 relate to the correlated subquery in the select list. I have highlighted the correlated subquery and the associated operations in Listing 3-1. Operation 1 is called multiple times during the course of the SQL statement execution and on each occasion is passed as input the DEPTNO from the main query. It calls its child, operation 2, passing the DEPTNO parameter, to perform a TABLE ACCESS FULL and pass rows back. All SORT AGGREGATE does is to count these rows and discard them. When the count has been returned it passes its single-row, single-column output to its parent: operation 0. You can see that the estimated cardinality for this operation is 1, which confirms the understanding of this operation’s function.
Operation 2: TABLE ACCESS FULL Because operation 1 is called multiple times during the course of the SQL statement and operation 1 always calls operation 2 once, operation 2 is called multiple times throughout the course of the statement as well. It performs a TABLE ACCESS FULL on each occasion. As rows are returned from the operation, but before being returned to operation 1, the filter predicate is applied; the DEPTNO from the row is matched with the DEPTNO input parameter, and the row is discarded if it does not match. Operation 3: HASH JOIN This operation performs a join of the EMP and DEPT tables. The operation starts off by calling its first child, operation 4, and as rows are returned they are placed into an in-memory hash cluster. Once all rows from DEPT have been returned and operation 4 is complete, operation 5 begins. As rows from operation 5 are returned they are matched with rows returned from operation 4, and, assuming one or more rows match, they are returned to operation 0 for further processing. In the case of a HASH JOIN (or any join for that matter) there are always exactly two child operations, and they are each invoked once per invocation of the join. Operations 4 and 5: TABLE FULL SCANs These operations perform full table scans on the EMP and DEPT tables and just pass their results back to their parent, operation 3. There are no predicates on these operations and, in this case, no child operations. Join of EMP and DEPT tables Oracle Database 11g: New Features for Administrators

47 Joins : Nested Loop Join
The nested loop join is the simplest join algorithm. It accepts two inputs, called the outer and inner tables. Inner nested loop join algorithm:
for each row R1 in outer table
    for each row R2 in inner table
        if R1 joins with R2
            return join (R1, R2)
The cost grows quickly with the size of the inputs; therefore, a nested loop join is efficient when at least one of the inputs is small. Joins There are multiple variations of physical join operators in SQL Server, which dictate how join predicates are matched and what is included in the resulting row. However, in terms of algorithms, there are just three join types: nested loop, merge, and hash joins. Nested Loop Join A nested loop join is the simplest join algorithm. As with any join type, it accepts two inputs, which are called outer and inner tables. The algorithm for an inner nested loop join is shown in Listing 25-6, and the algorithm for an outer nested loop join is shown in Listing 25-7. Listing 25-6. Inner nested loop join algorithm
for each row R1 in outer table
    for each row R2 in inner table
        if R1 joins with R2
            return join (R1, R2)
Oracle Database 11g: New Features for Administrators
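To see a nested loop in an Oracle plan (a sketch; LEADING and USE_NL are standard Oracle hints, and the SCOTT demo tables are assumed):
SQL> EXPLAIN PLAN FOR
  2  SELECT /*+ LEADING(d) USE_NL(e) */ e.ename, d.dname
  3  FROM scott.dept d JOIN scott.emp e ON e.deptno = d.deptno;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Expect a NESTED LOOPS operation with DEPT as the outer (driving) row source.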

48 Oracle Database 11g: New Features for Administrators 16 - 48
Joins : Merge Join The merge join works with two sorted inputs. It compares two rows, one at a time, and returns their join to the client if they are equal. Otherwise, it discards the lesser value and moves on to the next row in the input. The cost is proportional to the sum of the sizes of both inputs. More efficient on large inputs as compared to a nested loop join. Requires both inputs to be sorted, which is often the case when inputs are indexed on the join key column.
/* Prerequisite: inputs I1 and I2 are sorted */
get first row R1 from input I1
get first row R2 from input I2
while not end of either input
begin
    if R1 joins with R2
    begin
        return join (R1, R2)
        get next row R2 from I2
    end
    else if R1 < R2
        get next row R1 from I1
    else /* R1 > R2 */
        get next row R2 from I2
end
Merge Join The merge join works with two sorted inputs. It compares two rows, one at a time, and returns their join to the client if they are equal. Otherwise, it discards the lesser value and moves on to the next row in the input. Contrary to nested loop joins, a merge join requires at least one equality predicate on the join keys. Listing 25-8 shows the algorithm for the inner merge join. Listing 25-8. Inner merge join algorithm
/* Prerequisite: inputs I1 and I2 are sorted */
get first row R1 from input I1
get first row R2 from input I2
while not end of either input
begin
    if R1 joins with R2
    begin
        return join (R1, R2)
        get next row R2 from I2
    end
    else if R1 < R2
        get next row R1 from I1
    else /* R1 > R2 */
        get next row R2 from I2
end
The cost of the merge join algorithm is proportional to the sum of the sizes of both inputs, which makes it more efficient on large inputs as compared to a nested loop join. However, a merge join requires both inputs to be sorted, which is often the case when inputs are indexed on the join key column. In some cases, however, SQL Server may decide to sort input(s) using the Sort operator before a merge join. The cost of the sort obviously needs to be factored together with the cost of the join operator during its analysis. Oracle Database 11g: New Features for Administrators
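A merge join can be requested the same way (a sketch; USE_MERGE is a standard Oracle hint):
SQL> SELECT /*+ LEADING(e) USE_MERGE(d) */ e.ename, d.dname
  2  FROM scott.emp e JOIN scott.dept d ON e.deptno = d.deptno;
-- The plan typically shows MERGE JOIN with SORT JOIN children when the
-- inputs are not already sorted on deptno.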

49 Oracle Database 11g: New Features for Administrators 16 - 49
Joins : Hash Join Designed to handle large unsorted inputs. During the first, or build, phase, a hash join scans one of the inputs, calculates the hash values of the join key, and places them into the hash table. Next, in the second, or probe, phase, it scans the second input and checks, or probes, whether the hash value of the join key from the second input exists in the hash table. When this is the case, Oracle evaluates the join predicate for the row from the second input and all rows from the first input that belong to the same hash bucket. This comparison must be done because the algorithm that calculates the hash values does not guarantee the uniqueness of the hash value of individual keys, which leads to a hash collision when multiple different keys generate the same hash. Even though there is the possibility of additional overhead from the extra comparison operations due to hash collisions, those situations are relatively rare.
/* Build Phase */
for each row R1 in input I1
begin
    calculate hash value on R1 join key
    insert hash value to appropriate bucket in hash table
end
/* Probe Phase */
for each row R2 in input I2
    calculate hash value on R2 join key
    for each row R1 in hash table bucket
        if R1 joins with R2
            return join (R1, R2)
Hash Join Unlike the nested loop join, which works best on small inputs, and merge join, which excels on sorted inputs, a hash join is designed to handle large unsorted inputs. The hash join algorithm consists of two different phases. During the first, or build, phase, a hash join scans one of the inputs, calculates the hash values of the join key, and places it into the hash table. Next, in the second, or probe, phase, it scans the second input, and checks, or probes, if the hash value of the join key from the second input exists in the hash table. When this is the case, SQL Server evaluates the join predicate for the row from the second input and all rows from the first input, which belong to the same hash bucket. This comparison must be done because the algorithm that calculates the hash values does not guarantee the uniqueness of the hash value of individual keys, which leads to hash collision when multiple different keys generate the same hash. Even though there is the possibility of additional overhead from the extra comparison operations due to hash collisions, those situations are relatively rare. Listing 25-9 shows the algorithm of an inner hash join. Listing 25-9. Inner hash join algorithm
/* Build Phase */
for each row R1 in input I1
begin
    calculate hash value on R1 join key
    insert hash value to appropriate bucket in hash table
end
/* Probe Phase */
for each row R2 in input I2
    calculate hash value on R2 join key
    for each row R1 in hash table bucket
        if R1 joins with R2
            return join (R1, R2)
As you can guess, a hash join requires memory to store the hash table. The performance of a hash join greatly depends on correct memory grant estimation. When the memory estimation is incorrect, the hash join stores some hash table buckets in tempdb, which can greatly reduce the performance of the operator. When this happens, SQL Server tracks where the buckets are located: either in-memory or on-disk. For each row from the second input, it checks where the hash bucket is located. If it is in-memory, SQL Server processes the row immediately. Otherwise, it stores the row in another internal temporary table in tempdb.
A hash join requires memory to store the hash table, so its performance depends greatly on a correct memory estimate. In Oracle, the hash table is built in the session's PGA work area. When the work area is too small to hold the whole table, the join partitions both inputs and spills some partitions to the temporary tablespace; each spilled pair of partitions is then joined in a later pass (a one-pass join). If memory is insufficient even for that, partitions must be read and processed several times (a multipass join), which can greatly reduce the performance of the operator. The OPTIMAL_EXECUTIONS, ONEPASS_EXECUTIONS, and MULTIPASSES_EXECUTIONS columns of V$SQL_WORKAREA show which mode a work area actually ran in.
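A minimal sketch of requesting a hash join, again on the assumed SCOTT demo tables; the optimizer normally picks the smaller row source as the build input, and the LEADING hint can be combined with USE_HASH to control that choice:

SQL> SELECT /*+ LEADING(d) USE_HASH(e) */ e.ename, d.dname
  2  FROM emp e
  3  JOIN dept d ON e.deptno = d.deptno;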

Comparing Join Types A common mistake during performance tuning is relying strictly on the number of logical reads produced by a query. For example, it is entirely possible for a hash join to produce fewer reads than a nested loop join; that comparison, however, does not factor in CPU usage, memory overhead, or the performance impact of spills to the temporary tablespace. Similarly, a merge join is more efficient than a nested loop join on sorted inputs, but only when the cost of any extra sorts is taken into account.
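One way to compare join methods on more than logical reads is to capture row-source execution statistics; a minimal sketch using the standard GATHER_PLAN_STATISTICS hint and DBMS_XPLAN (emp and dept are again assumed demo tables):

SQL> SELECT /*+ GATHER_PLAN_STATISTICS USE_HASH(e) */ count(*)
  2  FROM emp e
  3  JOIN dept d ON e.deptno = d.deptno;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

The A-Rows, Buffers, Used-Mem, and Used-Tmp columns of the resulting plan show actual rows, logical reads, memory consumption, and temporary space per plan step, which makes the trade-offs between join methods visible.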

Adaptive Query Optimization
Adaptive query optimization is a set of capabilities that enables the optimizer to make run-time adjustments to execution plans and to discover additional information that can lead to better statistics. Adaptive optimization is helpful when existing statistics are not sufficient to generate an optimal plan. The feature set has two branches: adaptive plans, which defer the final choice of, for example, join method or parallel distribution method until execution time, and adaptive statistics, which comprise dynamic statistics, automatic reoptimization, and SQL plan directives. (Slide graphic: the adaptive query optimization feature set.)
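A minimal sketch of inspecting an adaptive plan; the +ADAPTIVE format modifier of DBMS_XPLAN is available from 12c onward:

SQL> SELECT * FROM TABLE(
  2    DBMS_XPLAN.DISPLAY_CURSOR(format => '+ADAPTIVE'));

Plan steps that belonged to the default plan but were discarded at run time are marked with a leading '-' in the Id column.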

Optimizer Statistics
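Object statistics — table, column, and index statistics — are stored in the data dictionary; a minimal sketch of inspecting them (user_tab_statistics and user_tab_col_statistics are standard views):

SQL> SELECT table_name, num_rows, blocks, last_analyzed
  2  FROM user_tab_statistics;

SQL> SELECT column_name, num_distinct, num_nulls, histogram
  2  FROM user_tab_col_statistics
  3  WHERE table_name = 'T';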

Gathering Statistics
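Statistics are gathered with the dbms_stats package; a minimal sketch for a single table (owner and table name are illustrative):

SQL> BEGIN
  2    dbms_stats.gather_table_stats(
  3      ownname          => user,
  4      tabname          => 'T',
  5      estimate_percent => dbms_stats.auto_sample_size,
  6      method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
  7  END;
  8  /

The choice of dbms_stats.auto_sample_size matters for what follows: top frequency and hybrid histograms are built only with this sampling mode.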

Histograms The query optimizer starts from the principle that data is uniformly distributed. For a uniformly distributed column — for example, an id column storing every integer from 1 up to 1,000 exactly once — the minimum value, the maximum value, and the number of distinct values are all the optimizer needs to produce a good estimate for a predicate such as id BETWEEN 6 AND 19. If data isn't uniformly distributed, the query optimizer can't compute acceptable estimates without additional information. For example, given the data set stored in the val2 column (see the output of the following query), how could the query optimizer make a meaningful estimate for a predicate like val2 = 105? It can't, because it has no clue that about 50 percent of the rows fulfill that predicate:

SQL> SELECT val2, count(*)
  2  FROM t
  3  GROUP BY val2
  4  ORDER BY val2;

      VAL2   COUNT(*)
       101          8
       102         25
       103         68
       ...

The additional information needed by the query optimizer about the nonuniform distribution of data is called a histogram. Prior to version 12.1, two types of histograms are available: frequency histograms and height-balanced histograms. Oracle Database 12.1 introduces two additional types that replace height-balanced histograms: top frequency histograms and hybrid histograms. Caution: the dbms_stats package builds top frequency histograms and hybrid histograms only when the sampling used for gathering the object statistics is based on dbms_stats.auto_sample_size.
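A minimal sketch of requesting a histogram on the skewed column through the method_opt parameter (SIZE caps the number of buckets; with SIZE AUTO the database decides, based on column usage, whether a histogram is worthwhile):

SQL> BEGIN
  2    dbms_stats.gather_table_stats(
  3      ownname    => user,
  4      tabname    => 'T',
  5      method_opt => 'FOR COLUMNS val2 SIZE 254');
  6  END;
  7  /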


Ex: Frequency Histograms
The frequency histogram is what most people understand by the term histogram. The frequency histogram stored in the data dictionary is similar to a simple value/frequency chart of the data, with one main difference: instead of the frequency, the cumulated frequency is stored. The following query turns the cumulated frequency back into the frequency by computing the difference between two consecutive bucket values (notice that the endpoint_number column is the cumulated frequency):

SQL> SELECT endpoint_value, endpoint_number,
  2         endpoint_number - lag(endpoint_number,1,0)
  3           OVER (ORDER BY endpoint_number) AS frequency
  4  FROM user_tab_histograms
  5  WHERE table_name = 'T'
  6  AND column_name = 'VAL2'
  7  ORDER BY endpoint_number;

The essential characteristics of a frequency histogram are the following:
• The number of buckets (in other words, the number of categories) is the same as the number of distinct values, and a row is available for each bucket in a view like user_tab_histograms.
• The endpoint_value column provides a numerical representation of the value itself. Hence, for non-numerical datatypes, the actual values must be encoded as numbers. Depending on the data, the datatype, and the version, the actual values might be visible in the endpoint_actual_value column. Note that values stored in histograms are distinguished based only on their leading 32 bytes (64 bytes as of version 12.1); as a result, long fixed prefixes might jeopardize the effectiveness of histograms, especially with multibyte character sets, where each character might take up to three bytes.
• The endpoint_number column provides the cumulated frequency of the value. To get the frequency itself, the endpoint_number of the previous row must be subtracted.
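As a worked illustration based on the val2 counts shown earlier (8, 25, and 68 rows for the values 101, 102, and 103; the remaining rows are omitted here), the dictionary stores the cumulated frequencies 8, 33, and 101, and the lag() subtraction recovers the per-value counts:

ENDPOINT_VALUE  ENDPOINT_NUMBER  FREQUENCY
           101                8          8
           102               33         25
           103              101         68
           ...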

