1 of 38 Session #4429 Calling SQL from PL/SQL: The Right Way Michael Rosenblum Dulcian, Inc. www.dulcian.com.


2 of 38 Who Am I? – “Misha”
- Oracle ACE
- Co-author of 3 books:
  - PL/SQL for Dummies
  - Expert PL/SQL Practices
  - PL/SQL Performance Tuning (Aug 2014)
- ODTUG 2009 Speaker of the Year
- Known for:
  - SQL and PL/SQL tuning
  - Complex functionality
  - Code generators
  - Repository-based development

3 of 38 Groundwork

4 of 38 Oracle Optimization
- It is all about CURSORs!
- Using cursors does not imply only row-by-row processing, because cursors point to SETs.
- Even internally, Oracle uses bulk optimization:
  - Cursor FOR loops pre-fetch 100 rows at a time.
  - This started in Oracle 10g.

5 of 38 Proof (1)

create table test_tab as
select * from all_objects where rownum <= 50000;

declare
  v_nr number;
begin
  dbms_monitor.session_trace_enable(waits=>true, binds=>true);
  for c in (select * from test_tab where rownum < 1000) loop
    v_nr := c.object_id;
  end loop;
  dbms_monitor.session_trace_disable;
end;

6 of 38 Proof (2)

-- TKPROF output
SQL ID: dyxt87m2np50t Plan Hash:
SELECT * FROM TEST_TAB WHERE ROWNUM < 1000

call      count    rows
Parse         1       0
Execute       1       0
Fetch
total

The last fetch returns fewer than 100 rows.

7 of 38 Proof (3)

declare
  v_nr number;
begin
  dbms_monitor.session_trace_enable(waits=>true, binds=>true);
  for c in (select * from test_tab where rownum < 1001) loop
    v_nr := c.object_id;
  end loop;
  dbms_monitor.session_trace_disable;
end;

-- TKPROF output
SQL ID: 544c85gf7tn8f Plan Hash:
SELECT * FROM TEST_TAB WHERE ROWNUM < 1001

call      count    rows
Parse         1       0
Execute       1       0
Fetch
total

The last fetch returns 100 rows, so one extra fetch is needed.

8 of 38 So… If Oracle is using sets internally, you should start using them too!

9 of 38 Loading Sets from SQL to PL/SQL

10 of 38 What Kind of Sets?
- Oracle collection datatypes:
  - Nested tables
    - Also called object collections
    - Available in both SQL and PL/SQL
  - VARRAYs
    - Available in both SQL and PL/SQL
  - Associative arrays
    - Also called PL/SQL tables or INDEX-BY tables
    - Two variations: INDEX BY BINARY_INTEGER or INDEX BY VARCHAR2
    - PL/SQL-only
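For reference, the three collection types above can be declared side by side as in this minimal sketch (the type and variable names are illustrative, not from the presentation):

```sql
declare
  -- nested table: also usable in SQL if created at schema level with CREATE TYPE
  type num_ntt is table of number;
  -- varray: has a fixed maximum size (here, 10 elements)
  type num_vat is varray(10) of number;
  -- associative array: PL/SQL-only, sparse, integer or string index
  type num_aat is table of number index by varchar2(30);

  v_nt num_ntt := num_ntt(1, 2, 3);   -- nested tables need a constructor
  v_va num_vat := num_vat(1, 2);      -- varrays need a constructor too
  v_aa num_aat;                       -- associative arrays do not
begin
  v_aa('first') := 100;               -- assign directly by key
  dbms_output.put_line(v_nt.count || ' ' || v_va.count || ' ' || v_aa.count);
end;
```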

11 of 38 Usability
- Associative arrays are useful when:
  - You work only within PL/SQL.
  - You need the index to be text instead of a number.
- Nested tables are useful when:
  - You need to use collections in both SQL and PL/SQL.
- VARRAYs are useful…
  - Sorry, never needed one in the last 15 years…
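The text-index point deserves a quick illustration. A sketch of a string-indexed associative array, with illustrative names (not from the presentation):

```sql
declare
  type salary_aat is table of number index by varchar2(30);
  v_salary salary_aat;
  v_key    varchar2(30);
begin
  -- values are looked up directly by name; no numeric position is needed
  v_salary('KING')  := 5000;
  v_salary('SCOTT') := 3000;

  -- FIRST/NEXT walk the keys in character-sort order
  v_key := v_salary.first;
  while v_key is not null loop
    dbms_output.put_line(v_key || ': ' || v_salary(v_key));
    v_key := v_salary.next(v_key);
  end loop;
end;
```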

12 of 38 Research
- Task:
  - Data needs to be retrieved from a remote location via a DBLink.
  - Each row has to be processed locally.
  - The source table contains 50,000 rows.
- Problem:
  - Analyze different ways of achieving the goal.
  - Create best practices.

13 of 38 Use Case #1
- Extreme options:
  - Row-by-row processing
  - BULK COLLECT everything into a local object collection beforehand
- Limited scope:
  - Only one column from the source table is touched

14 of 38 Use Case #1 – BULK

SQL> connect
SQL> declare
       type number_tt is table of number;
       v_tt number_tt;
       v_nr number;
     begin
       select object_id
         bulk collect into v_tt
         from
       for i in v_tt.first..v_tt.last loop
         v_nr := v_tt(i);
       end loop;
     end;
     /
Elapsed: 00:00:00.09

SQL> select name, value from stats where name in
       ('STAT...session pga memory max',
        'STAT...SQL*Net roundtrips to/from dblink');

NAME                                         VALUE
STAT...session pga memory max
STAT...SQL*Net roundtrips to/from dblink        10

15 of 38 Use Case #1 – RowByRow

SQL> connect
SQL> declare
       v_nr number;
     begin
       for c in (select object_id
                   from
       loop
         v_nr := c.object_id;
       end loop;
     end;
     /
Elapsed: 00:00:00.42

SQL> select name, value from stats where name in
       ('STAT...session pga memory max',
        'STAT...SQL*Net roundtrips to/from dblink');

NAME                                         VALUE
STAT...session pga memory max
STAT...SQL*Net roundtrips to/from dblink       510

16 of 38 Use Case #1 – Analysis
- Results:

                                       Bulk         Row by Row
  Processing time                      0.09         0.42
  PGA memory max                       3'330'400    2'543'968
  SQL*Net roundtrips to/from dblink    10           510

- Summary:
  - BULK COLLECT is faster and less network-intensive…
  - … but it uses more memory – even for a single column!
- Conclusion:
  - More tests are needed!

17 of 38 Use Case #2
- Scope change:
  - Get all columns from the source table (15 total)

  Name             Data Type
  OWNER            VARCHAR2(30 BYTE) NOT NULL
  OBJECT_NAME      VARCHAR2(30 BYTE) NOT NULL
  SUBOBJECT_NAME   VARCHAR2(30 BYTE)
  OBJECT_ID        NUMBER NOT NULL
  DATA_OBJECT_ID   NUMBER
  OBJECT_TYPE      VARCHAR2(19 BYTE)
  CREATED          DATE NOT NULL
  LAST_DDL_TIME    DATE NOT NULL
  TIMESTAMP        VARCHAR2(19 BYTE)
  STATUS           VARCHAR2(7 BYTE)
  TEMPORARY        VARCHAR2(1 BYTE)
  GENERATED        VARCHAR2(1 BYTE)
  SECONDARY        VARCHAR2(1 BYTE)
  NAMESPACE        NUMBER NOT NULL
  EDITION_NAME     VARCHAR2(30 BYTE)

18 of 38 Use Case #2 – BULK

SQL> connect
SQL> declare
       type table_tt is table of
       v_tt table_tt;
       v_nr number;
     begin
       select *
         bulk collect into v_tt
         from
       for i in v_tt.first..v_tt.last loop
         v_nr := v_tt(i).object_id;
       end loop;
     end;
     /
Elapsed: 00:00:00.51

SQL> select name, value from stats where name in
       ('STAT...session pga memory max',
        'STAT...SQL*Net roundtrips to/from dblink');

NAME                                         VALUE
STAT...session pga memory max
STAT...SQL*Net roundtrips to/from dblink        10

19 of 38 Use Case #2 – RowByRow

SQL> connect
SQL> declare
       v_nr number;
     begin
       for c in (select * from
       loop
         v_nr := c.object_id;
       end loop;
     end;
     /
Elapsed: 00:00:00.77

SQL> select name, value from stats where name in
       ('STAT...session pga memory max',
        'STAT...SQL*Net roundtrips to/from dblink');

NAME                                         VALUE
STAT...session pga memory max
STAT...SQL*Net roundtrips to/from dblink       510

20 of 38 Use Case #2 – Analysis
- Results:

                                       Bulk          Row by Row
  Processing time                      0.51          0.77
  PGA memory max                       34'656'608    2'609'504
  SQL*Net roundtrips to/from dblink    10            510

- Summary:
  - BULK COLLECT is still faster…
  - … but memory usage is wa-a-a-ay up!
- Conclusion:
  - “Bulk everything” may cause major problems if your system is memory-bound!
  - It may cause the database to slow down.

21 of 38 Use Case #3
- Walking the line:
  - FETCH … BULK COLLECT LIMIT decreases the memory workload while still using bulk operations.
  - It does not make sense to test limits of less than 100, because that is Oracle’s internal pre-fetch size.

22 of 38 Use Case #3 – BULK LIMIT

SQL> declare
       type collection_tt is table of
       v_tt collection_tt;
       v_nr number;
       v_cur sys_refcursor;
       v_limit_nr binary_integer := 5000;  -- the limit can be a variable
     begin
       open v_cur for select * from
       loop
         fetch v_cur bulk collect into v_tt
           limit v_limit_nr;
         exit when v_tt.count() = 0;
         for i in v_tt.first..v_tt.last loop
           v_nr := v_tt(i).object_id;
         end loop;
         exit when v_tt.count < v_limit_nr;
       end loop;
       close v_cur;
     end;
     /

23 of 38 Use Case #3 – Analysis
- Results:

  Limit size    Time    Max PGA    Roundtrips
  (the per-row figures are garbled in this transcript)

- Summary:
  - As the bulk limit increases, processing time eventually stops dropping, because memory management becomes costly!
  - This point is different for different hardware/software.
- Conclusion:
  - Run your own tests and find the most efficient bulk limit.

24 of 38 Pagination vs. Continuous Fetch

25 of 38 Row-Limiting Clause (1)
- New feature in Oracle 12c:

  SELECT … FROM … WHERE …
  OFFSET n ROWS FETCH NEXT m ROWS ONLY

- Question: Can it be an alternative to continuous fetch?

26 of 38 Row-Limiting Clause (2)

SQL> exec runstats_pkg.rs_start;
SQL> exec runstats_pkg.rs_middle;
SQL> declare
       type table_tt is table of test_tab%rowtype;
       v_tt table_tt;
       v_nr number;
       v_limit_nr constant number := 5000;
       v_counter_nr number := 0;
     begin
       loop
         select *
           bulk collect into v_tt
           from test_tab                  -- no DBLink: fewer moving parts!
         offset v_counter_nr * v_limit_nr rows
           fetch next 5000 rows only;     -- limitation/bug: has to be hardcoded for now
         exit when v_tt.count() = 0;
         for i in v_tt.first..v_tt.last loop
           v_nr := v_tt(i).object_id;
         end loop;
         exit when v_tt.count < v_limit_nr;
         v_counter_nr := v_counter_nr + 1;
       end loop;
     end;
     /

27 of 38 Row-Limiting Clause – Analysis
- Results:

  SQL> exec runstats_pkg.rs_stop;
  Run1 ran in 33 cpu hsecs
  Run2 ran in 78 cpu hsecs

  Name                                      Run1        Run2
  STAT...consistent gets                     900       5,360
  STAT...logical read bytes from cache 7,331,840  44,269,568

- Summary:
  - The row-limiting clause cannot substitute for continuous fetch: it causes tables to be re-read multiple times.

28 of 38 Proof of Multiple Reads
- Original query:

  SELECT * FROM test_tab
  OFFSET 5000 ROWS FETCH NEXT 5000 ROWS ONLY

- Section “Unparsed Query” of the trace:

  SELECT …
  FROM (SELECT "TEST_TAB".*,
               ROW_NUMBER() OVER (ORDER BY NULL) "rowlimit_$$_rownumber"
          FROM "HR"."TEST_TAB" "TEST_TAB") "from$_subquery$_002"
  WHERE "from$_subquery$_002"."rowlimit_$$_rownumber" <=
          CASE WHEN (5000 >= 0) THEN FLOOR(TO_NUMBER(5000)) ELSE 0 END
    AND "from$_subquery$_002"."rowlimit_$$_rownumber" > 5000

29 of 38 Merging Sets in PL/SQL

30 of 38 Core Use Case
- The story:
  - A generic “Attention” folder with a lot of conditions is supported by a single view.
  - Each set of conditions is represented by a SQL query.
  - Results are merged using UNION ALL.
- Problem:
  - The view became unmaintainable.
- Solution:
  - A PL/SQL function that returns an object collection.

31 of 38 Code Sample

create type emp_search_ot as object
  (empno_nr number, empno_dsp varchar2(256), comp_nr number);
create type emp_search_nt is table of emp_search_ot;

create function f_attention_ot (i_empno number)
return emp_search_nt is
  v_emp_rec emp%rowtype;
  v_sub_nt  emp_search_nt;
  v_comm_nt emp_search_nt;
  v_out_nt  emp_search_nt;
begin
  -- load information about the logged user
  select * into v_emp_rec from emp where empno = i_empno;
  -- get subordinates
  if v_emp_rec.job = 'manager' then      -- directly reporting
    ... query 1 ... into v_sub_nt ...
  elsif v_emp_rec.job = 'president' then -- get everybody except himself
    ... query 2 ... into v_sub_nt ...
  end if;
  -- check all people with commissions from other departments
  if v_emp_rec.job in ('manager','analyst') then
    ... query 1 ... into v_comm_nt ...
  end if;
  -- merge the two collections together
  v_out_nt := v_sub_nt multiset union distinct v_comm_nt;
  return v_out_nt;
end;

32 of 38 Cost of MULTISET (1)
- Question:
  - How much overhead is created by doing MULTISET operations instead of pure SQL?
- Answer:
  - Interesting to know!
- Test setup:
  - Read a large number of rows (~36,000 total) out of two similar tables (50,000 rows each) and put them together using all available mechanisms.

33 of 38 Cost of MULTISET – Setup (1)

create table test_tab2 as select * from test_tab;

create type test_tab_ot as object
  (owner_tx varchar2(30), name_tx varchar2(30),
   object_id number, type_tx varchar2(30));
create type test_tab_nt is table of test_tab_ot;

create function f_seachtesttab_tt (i_type_tx varchar2)
return test_tab_nt is
  v_out_tt test_tab_nt;
begin
  select test_tab_ot(owner, object_name, object_id, object_type)
    bulk collect into v_out_tt
    from test_tab
   where object_type = i_type_tx;
  return v_out_tt;
end;

create function f_seachtesttab2_tt (i_type_tx varchar2)
return test_tab_nt is
  v_out_tt test_tab_nt;
begin
  select test_tab_ot(owner, object_name, object_id, object_type)
    bulk collect into v_out_tt
    from test_tab2
   where object_type = i_type_tx;
  return v_out_tt;
end;

34 of 38 Cost of MULTISET – Setup (2)

create function f_seachtestunion_tt
  (i_type1_tx varchar2, i_type2_tx varchar2)
return test_tab_nt is
  v_type1_tt test_tab_nt;
  v_type2_tt test_tab_nt;
  v_out_tt   test_tab_nt;
begin
  select test_tab_ot(owner, object_name, object_id, object_type)
    bulk collect into v_type1_tt
    from test_tab
   where object_type = i_type1_tx;

  select test_tab_ot(owner, object_name, object_id, object_type)
    bulk collect into v_type2_tt
    from test_tab2
   where object_type = i_type2_tx;

  v_out_tt := v_type1_tt multiset union all v_type2_tt;
  return v_out_tt;
end;

35 of 38 Cost of MULTISET – Run

-- Run 1: UNION of SQL queries
SQL> select max(object_id), min(object_id), count(*)
       from (select * from test_tab
              where object_type = 'SYNONYM'
             union all
             select * from test_tab2
              where object_type = 'JAVA CLASS');

-- Run 2: UNION of collections
SQL> select max(object_id), min(object_id), count(*)
       from (select *
               from table(f_seachTestTab_tt('SYNONYM'))
             union all
             select *
               from table(f_seachTestTab2_tt('JAVA CLASS')));

-- Run 3: UNION inside PL/SQL
SQL> select max(object_id), min(object_id), count(*)
       from table
            (f_seachTestUnion_tt('SYNONYM','JAVA CLASS'));

36 of 38 Cost of MULTISET – Analysis
- Results:

                          Time    Max PGA
  Union of SQL            0.05    1'564'840
  Union of collections    0.55    14'409'896
  MULTISET union          0.68    40'231'080

- Summary:
  - Collection operations cause significant memory overhead and slowdown.
- Conclusions:
  - With a small data set, you can trade some performance for maintainability.
  - With large data sets, you should stay with SQL as long as you can and switch to PL/SQL only when you don’t have any other choice.

37 of 38 Summary
- You must think in sets when integrating SQL into PL/SQL.
  - Otherwise, your solutions will never scale.
- Always keep the overhead cost of object collection memory management in mind.
- The most effective bulk size depends upon your database configuration…
  - … but is usually between 100 and 10,000.

38 of 38 Contact Information
- Michael Rosenblum –
- Blog – wonderingmisha.blogspot.com
- Website – www.dulcian.com
- Already available: PL/SQL Performance Tuning