1
A Holistic Approach To Performance Tuning Oracle Applications Release 11 and 11i
2
Andy Tremayne Applications Performance Group
3
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
4
The Methodology Problem definition methods – Necessarily abstract and complex – Deployment requires specialist skills This methodology is – Very simple, fast and generic – Based on best practice and a real-world approach The systematic approach – Focuses investigative attention and the tuning effort – Enables early identification of the resources needed – You do not need to be an expert
5
Three Key Areas 1. Accurately define the problem – Define the problems and their characteristics – Key to identifying the source of performance issues – Focuses tuning effort 2. Define and agree performance target levels – To support the critical business functions – To provide the definition of success 3. Understand the system performance factors
6
The Problem Definition Stages – Characterise the problem – What is the nature of the problem? – Ensure you fully understand it – “Slow database” is too ambiguous
7
The Problem Definition Stages – Specify the locations – Use converse questions – List differences between them – Compare data routes, middle tiers...
8
The Problem Definition Stages – Note the times of day / dates – Link to events e.g. the business cycle – Identify the origin of the problem – Identify whether it is still a problem when run in isolation
9
The Problem Definition Stages – How many users are affected? – How much functionality? – How many applications? – Look for problem commonality
10
The Problem Definition Stages – Define the relative importance – When does it need to be fixed by? – Is there a viable workaround? – Consider the impact on the business
11
– Define the problem quickly – Define the problem accurately – Identify the appropriate resources – Solve the problem quickly
12
The “When” Stage Checklist When does the problem occur? – Identify an underlying trend. Correlate with system events and changes … Ask the questions: – When does the problem occur? – Has it always been slow? – Did it change suddenly? – Is it slow all the time? – Is it slow when the system is heavily used? – Is it getting slower?
13
The “When” Stage Flowchart
14
Project Documentation Summary (PDS) The PDS – Structures and summarizes the salient performance indicators – Spans the entire system – Identifies areas to monitor – Ensures that the investigative focus remains impartial
15
Project Documentation Summary Documents – Oracle and Customer Contacts – Problem Definition – System Environment – Oracle and Applications Environments – Web/Forms Server Environments – Client and Network – Process Information It should only take 3 hours to complete
16
PDS-Summary
20
PDS-Process Information The PDS provides a discussion document – It breaks down technical and political barriers – It helps build a business case and justify change When will your next emergency occur? What is your strategy?
21
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
22
Release 11i Browser Benchmarks (GX100, 500MHz Celeron, 128MB, JInitiator 1.1.8.16) – Timings in seconds for IE 5.5, Netscape 4.7 and IE 6 on Windows 95/98 SE, NT SP5, 2000 SP1 and XP, covering Applications Startup (Open), (PO) Purchase Order W/B (Open), (AP) Invoice Entry W/B (Open), SA Users (Open) and Concurrent Requests (Find All) IE 6 appears consistently best (except 1 or 2 results) – All results for Windows NT, 2000 and XP are within 3% – Always perform your own tests! Other benchmarks have shown Windows NT is faster with 300MHz and higher
23
PC Speed vs Latency Benchmark – Time in seconds to complete the benchmark at network latencies of 6ms, 300ms and 1400ms:
133MHz Win 95, 48MB: 66 s / 67.7 s / 80 s
233MHz Win 95, 48MB: 30 s / 36.5 s / 53 s
300MHz Win NT, 128MB: 25.5 s / 29.4 s / 35 s
400MHz Win NT, 128MB: 21.4 s / 26.5 s / 35 s
CPU speed compensates for high-latency situations – A little dated now – JInitiator provides better times – Timings are very comparable with the browser benchmark
24
Generic PC Tuning - Still very common! Floating menu bars – Frequently ‘polled’, usually wasting 10% CPU – 300MHz becomes a 266MHz …. – Keyboard shortcuts are a quicker, easier alternative Windows screen savers – Use up to 8MB of memory Video – Up-to-date drivers can provide a 20% improvement – Using 256 colors saves 0.5 - 0.75 sec opening a form The paper contains a complete tuning checklist
25
What Next? Benchmark and compare with targets – If targets are achieved – Move the PC away from the server, 1 hop at a time (diagram: Tuned Client – LAN – Switch – Application Server(s) – Database Server)
26
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
27
Middle Tier Profiles Middle Tier Server profile – Memory is a greater concern than CPU Database Server profile – I/O and CPU are more of a concern than memory Separate machines – Scalability – Simpler profile management Keeping multiple forms open on a poorly tuned system only exacerbates memory problems, slowing response times even further
28
Java Virtual Machine Questions How many JVMs and how many users per JVM? – Between 20 and 50 “active” users per JVM Hundreds of concurrent users – Depends on CPU speed, the Applications mix and the users – Perform a saturation test for your particular Applications mix and business How much memory? – -ms128m and -mx256m … up to 400MB – Set the soft limit to 80% of the hard limit – Set "ulimit -n 1024" for the Apache and JServ processes
29
Optimizing Apache The current default for Apache is HTTP 1.1 – The “keep alive” session feature in 1.1 – Keep-alive messages are sent between the browser and the server Performance boost for Internet Explorer web users – Force Apache to use HTTP v1.0 – Reduced from 26 to 5 seconds when opening some dialog boxes – In the httpd.conf file: – BrowserMatch "MSIE 5\.[^;]*; " nokeepalive downgrade-1.0 force-response-1.0 Always test and verify changes yourself
30
FRD/Performance Collector Performance Collector (Forms 6i) – Specify “record=performance” – Analyze the output using f60parse.pl – Four main screens showing where the most time is spent: client, network, DB or Forms Server timings
31
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
32
Getting Things Right for 11i An 8KB block size is strongly recommended for 8i/11i – Don’t use 4KB unless constrained by the platform! – Increases performance by 40% for batch programs and large data-sort queries, and by 4-5% for light data-sort queries Remove extraneous settings and events from the database initialization files!
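As a quick sanity check before an upgrade or rebuild, the current block size can be read from the instance parameters. A minimal sketch using the standard V$PARAMETER view:
-- Confirm the database block size; changing it requires recreating the database.
select name, value
  from v$parameter
 where name = 'db_block_size';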
33
Statspack - Introduction Introduced in 8.1.6 – Captures performance data for new DB features – Summary page – Improved drill down – Captures high-load and some literal SQL – Backports available (unsupported) “Diagnosing Performance With Statspack” papers – http://technet.oracle.com/deploy/performance/content.htm – Results are stored in tables – Each snap is identified by a Snapshot_id
34
Statspack - With Applications Always use the 8.1.7 version – Improved space management and report Use the special versions that include the module name – No module name denotes custom code – MetaLink Note No. 153507.1 – Use Snap Level 5 Oracle 9i – The new Snap Level 6 collects execution plans Use the modified parameters with Applications One hour is the best unit of work
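A minimal collection cycle might look like the following, assuming the standard 8.1.7 Statspack installation and its spreport.sql report script; adjust the snap level and interval to your own testing:
-- Take a level 5 snapshot at the start and end of a representative hour of load.
execute statspack.snap(i_snap_level => 5);
-- ... one hour of normal Applications activity ...
execute statspack.snap(i_snap_level => 5);
-- Then generate the report between the two snapshot ids:
-- @?/rdbms/admin/spreport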
35
Statspack – Load Profile
Load Profile          Per Second    Per Transaction
Redo size:            234,110.14    15,400.80
Logical reads:        194,977.77    12,826.50
Block changes:        1,413.80      93.01
Physical reads:       3,546.96      233.33
Physical writes:      84.18         5.54
User calls:           492.08        32.37
Parses:               207.88        13.68
Hard parses:          4.66          0.31
Sorts:                293.56        19.31
Logons:               1.17          0.08
Executes:             1,300.36      85.54
Transactions:         15.20
36
Statspack – High Load SQL
Buffer Gets  Executions  Gets per Exec  % Total  Hash Value  Module
-----------  ----------  -------------  -------  ----------  ------
 35,087,907          43      815,997.8      5.0  2683031174  WSHRDPAK
SELECT MSI.SEGMENT1 c_item_flex, det.inventory_item_id c_inv_item_id, msi.description c_item_description, det.customer_item_id c_customer_item_id, det.source_header_number c_so_number, det.source_line_number c_so_line_number, det.cust_po_number c_po_number, sum ( round ( nvl ( det.requested_quanti
The Module column (here WSHRDPAK) identifies the module name of the code that issued the statement
37
Gathering Statistics NEVER ANALYZE the SYS schema!!! Release 11i - only ever use Gather Schema Statistics – 10% sample Prior to Release 11i – Analyze using an adequate sample size Improves batch performance by around 40%, especially in AR, PA and concurrent processing – Inadequate sample sizes may reduce performance by up to 40%
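For illustration only: in Release 11i you would normally submit the Gather Schema Statistics concurrent program rather than call FND_STATS directly, and the exact FND_STATS signature can vary by patch level; the schema and table below are examples.
-- Release 11i: 10% sample for one schema via the FND_STATS API
-- (normally submitted as the Gather Schema Statistics request).
execute fnd_stats.gather_schema_stats('AR', 10);

-- Prior to Release 11i: ANALYZE with an adequate sample size (example table).
analyze table AR.RA_CUSTOMER_TRX_ALL estimate statistics sample 10 percent;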
38
Package Pinning Strategy Do pin – A core set of SYS and FND packages – A core set for the products you are using – Any large (> 50KB) constantly aged-out packages Size and number of executions in V$db_object_cache Monitor X$KSMLRU – The no. of objects (KSMLRNUM) – Displaced packages and PL/SQL Do not pin – Small, infrequently used packages – Those used by long-running batch processes or reports
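A sketch of how candidates might be identified and pinned, assuming the DBMS_SHARED_POOL package has been installed (dbmspool.sql); the package named is only an example:
-- Large packages currently in the shared pool, with execution counts.
select owner, name, type, sharable_mem, executions, kept
  from v$db_object_cache
 where type in ('PACKAGE', 'PACKAGE BODY')
   and sharable_mem > 51200
 order by sharable_mem desc;

-- Pin an example package so it is never aged out.
execute dbms_shared_pool.keep('APPS.FND_GLOBAL', 'P');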
39
Literal SQL Literal SQL – Select …. From …. Where Order No = 123456 Literal SQL is fine for batch processing – Statements are run infrequently – Enables the CBO to accurately cost the use of an index Literal SQL cannot be shared – It severely limits OLTP scalability and throughput – Statements can use large amounts of shared memory e.g. 1MB; 100 statements = 100MB – The concern is with large data sets e.g. order_no – Fix by converting literal SQL to use bind variables
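A contrived sketch of the difference; the table and column names are placeholders only. The literal form creates a new shared-pool entry for every order number, while the bind form shares a single cursor:
-- Literal SQL: one parsed statement per distinct order number.
select header_id from oe_order_headers_all where order_number = 123456;

-- Bind-variable form: a single shareable statement.
variable ord number
execute :ord := 123456
select header_id from oe_order_headers_all where order_number = :ord;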
40
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
41
Raw Partitions Consider using raw devices only if disk I/O is the only remaining performance bottleneck and cannot otherwise be resolved Raw devices are required for OPS and RAC – Generally much more difficult to administer In theory, raw devices improve disk I/O by bypassing file system buffering – Some conversions from UNIX file systems to raw devices have improved performance by 10%-15% – BUT those conversions involved a database export and import, which eradicates row chaining and rebuilds and balances indexes
42
Stripe Size (simplified) (diagram) – With a 64KB stripe size, a 64KB Oracle read passes through the operating system as 1 x 64KB disk read The benefit depends on the amount of I/O. Use the Statspack I/O figures.
43
Measuring Disk I/O When using disk arrays – Operating system utilities are limited – Specialist software is sometimes needed Instead use the Oracle figures – This is the time Oracle sees for an I/O – FileStat figures in the UtlEstat or Statspack report – <20ms read, <30ms write (max!) on non-striped disks – <10ms on a striped disk array Only the Production instance should be running on the Production server
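A sketch of pulling the same figures directly from the file statistics views; READTIM and WRITETIM are recorded in hundredths of a second (so multiply by 10 for milliseconds) and require timed_statistics = true:
-- Average read and write time per data file, in milliseconds.
select df.name,
       fs.phyrds,
       round(10 * fs.readtim  / greatest(fs.phyrds, 1), 1)  avg_read_ms,
       fs.phywrts,
       round(10 * fs.writetim / greatest(fs.phywrts, 1), 1) avg_write_ms
  from v$filestat fs, v$datafile df
 where fs.file# = df.file#
 order by avg_read_ms desc;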
44
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
45
A Stuck Performance Issue? Always keep the raw trace file! Does the total equal the real-world time?
46
Mapping A User Session Sign–On Auditing - set at User level or higher – Records each time a person signs on to an application The Applications help screen contains: – AUDSID (maps to V$SESSION.AUDSID)
select U.USER_NAME
  from FND_USER U, FND_LOGINS L, V$SESSION V
 where U.USER_ID = L.USER_ID
   and L.SPID = V.PROCESS
   and V.AUDSID = &1;

USER_NAME
-----------------------------------
OPERATIONS
47
Using Event 10046 Enable trace for a forms session or report, within a stored procedure or in a concurrent program – Set timed_statistics = true – Set event 10046 for more advanced troubleshooting System: Initialization SQL Statement – custom profile option:
'ALTER SESSION SET EVENTS ='||''''||' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 4 '||''''
Level       Included Information
null or 1   Same as standard SQL Trace functionality
4           As level 1 + bind variable information
8           As level 1 + session wait information
12          As level 1 + bind variable and session wait information
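For a session you control directly (for example from SQL*Plus), the equivalent commands are shown below; this is the standard event syntax rather than anything Applications-specific:
-- Enable timing and extended trace with binds and waits (level 12).
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';
-- ... run the suspect code ...
-- Switch the trace off again.
alter session set events '10046 trace name context off';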
48
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
49
Bandwidth and Latency Latency – Delay between the instant a request is made for data and the transfer starts – Influenced by both device and link latency Bandwidth – Amount of data that the network can transfer – Size of the pipe / how many packets at once
50
Understanding the Network To identify a performance problem – You need only basic knowledge Create a detailed network diagram including: – The location and number of all Oracle users – Every device from the client to the database server – The bandwidth and latency of all links and devices Use a full size packet: ping -l1472 -n50 Note! TraceRoute shows the optimal route - which is not necessarily the actual route.
51
Bandwidth (chart) – Kilobits per second by type of user (Heavy, Normal and Low) for NCA-11i and SC clients; ranges shown are 4 - 8, 4 - 6, 2 - 4, 2 - 3, 2 - 6 and 0 - 2 Kbps, with averages of 4.8, 2.4 and 1.2 Kbps
52
Isn’t Bandwidth Enough? Mission-critical applications suffer while less important traffic can dominate your network – Large downloads from corporate and external web sites – Email synchronization – Downloading large email attachments – MP3 uploads and downloads – RealPlayer and other streaming traffic – Global Single Instance Some customers have doubled link capacity only to find that Application performance problems persist! WAN links continue to be problematic because of their cost and relatively low bandwidth
53
Quality Of Service (QoS) QoS - A set of features (and tools) – Usually implemented in routers – Classifies traffic, enabling differentiated service levels Four Main Policies – All concerned with congestion management: Packet Classification, Packet Shaping, Rate Limiting, Priority Queuing
54
QoS Traffic Control (diagram) – Input Queue → Classification (Coloring) & Rate Limiting Engine → Traffic Queues (High, Medium, Normal, Low, Queue Bypass) → Shaping and Rate Limiting Engine → Transmission Queue
55
Network Traffic Shaping (diagram) – Uncontrolled versus shaped traffic over a WAN/Internet link for Mission Critical Applications, Email, Internet Browsing and Other Traffic
56
Agenda The Methodology – Problem Definition – Project Documentation Summary (PDS) – Benchmark Information Tuning The Technology Stacks – The Client – The Middle Tier – The Database – The Server – The SQL Access Paths – The Network Tuning Applications
57
Concurrent Manager Tuning Many sites now have up to 100,000 requests – Check your top 10 SQL statements – Run the purge - not just fnd_concurrent_requests! If you are tight on CPU – 50% of the tuning is in the business – A strategy will mean you only need 20-25 managers – Watch the sleep times Separate the log and out directories to reduce contention. There are several other recommendations in the paper.
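To see where the request volume is coming from before deciding on a purge and scheduling strategy, a query along these lines can help (the same tables are used by the script later in this presentation; the seven-day window is arbitrary):
-- Completed requests per program over the last week.
select p.user_concurrent_program_name, count(*) requests
  from fnd_concurrent_requests r, fnd_concurrent_programs_vl p
 where r.concurrent_program_id = p.concurrent_program_id
   and r.phase_code = 'C'
   and r.request_date > sysdate - 7
 group by p.user_concurrent_program_name
 order by 2 desc;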
58
Enhancing The Concurrent Manager Paper 164085.1 – Moving sensitive report files to secure directories – Compressing reports and distributing them during off-peak periods over Wide Area Network links – Automatically faxing reports and orders – Converting documents to PDF (with and without Adobe) – Automatically printing from UNIX – Automatically emailing reports or documents – Automatically archiving selected requests before purging
59
Enhancing The Concurrent Manager
63
Embedding SQL in UNIX Scripts
# The here-document reads down to the "!"; -s suppresses the SQL*Plus banner
# and login overlay. In a concurrent program, replace apps/test with $FCP_LOGIN;
# use a for loop to process multiple returned values.
unixvar=`sqlplus -s apps/test <<!
set termout off
set feedback off
set pagesize 0
select R.REQUEST_ID
from FND_CONCURRENT_PROGRAMS_VL P, FND_CONCURRENT_REQUESTS R
where P.CONCURRENT_PROGRAM_ID = R.CONCURRENT_PROGRAM_ID
and P.USER_CONCURRENT_PROGRAM_NAME = 'Active Users'
and R.PHASE_CODE = 'C'
and R.STATUS_CODE = 'C';
exit
!`
echo $unixvar
# ....
64
Clever Queries
Operator          Description         Comment
= xyz             Equals xyz          xyz can be a number, word, or date enclosed in single quotes
!= xyz            Not equal to xyz    As above
< xyz             Less than xyz       As above
> xyz             Greater than xyz    As above
like xyz          Similar to xyz      like ‘xyz’ may contain the _ wildcard or %
between x and y   Between x and y     x and y may be numbers, words or dates
in (x,y,z,...)    Exists in list      As above
is null           Is empty            for example printed_date is null
65
AR1020... Clever Queries
70
Clever queries may be extended – Using :a, :b, :c ……. – :a like ‘%’ and PRINTED_DATE is null – :a like ‘%’ and POSTED_FLAG = ‘Y’ – :a like ‘%’ and ATTRIBUTE1 = ‘Special Item’ – :a like ‘%’ and VENDOR_ID in (select VENDOR_ID from PO_VENDORS where HOLD_FLAG = 'Y') Integrating with form folders is a very powerful technique Reduces the need for customizations
71
In Summary… The holistic approach – Provides a simple, fast approach – Is key to identifying the source of performance issues – Focuses the tuning effort Throughout the tuning exercise – Make a change and measure the effect – Investigate every area – Manage the server load at all times Although tuning is a science it also involves common sense and sometimes, a little ingenuity.
72
For More Information Holistic Paper – 84 pages: 69565.1 – PDS: 165300.1 – Concurrent Paper: 164085.1 Tuning Handbook: – ISBN 0-07-212549-7 Questions – Catch me!