AS/400 Performance Tuning


1 AS/400 Performance Tuning
(Session S400 04)
Ron Schmerbauch, IBM

2 Session Preview
How do I know when there's a performance problem?
What can I do about it?
General performance overview of your AS/400
General performance overview of your R/3 system
Where do I look for DB performance info?
How can I improve DB performance?

3 Session Focus - DB2 for AS/400
(Diagram: scope ranges from all work on the system down to a specific or complex SQL request)
DB performance influences and is influenced by your system as a whole
Performance needs to be viewed from different perspectives

4 Session Focus - DB2 for AS/400
(Diagram: scope ranges from all work on the system down to a specific or complex SQL request)
Performance is a moving target
Many factors to consider
Analysis and tuning is a mixture of art and science

5 How do I know when there's a performance problem?
Other than the user complaints... you will see a difference in your performance data
Checking the "good" days is just as important as checking the "bad" days
Every system is different; your own experience and historical data are your best reference

6 What actions might I take?
Here are the options...
Get more computing resources
Move workload to a different time or day
Adjust use of computing resources
Change the workload, or influence the implementation
Change the performance expectations

7 How do I decide which action to take?
Analyze the situation using performance tools
Rely on history and experience
Experiment
More than one action may be necessary

8 Session Focus - DB2 for AS/400
(Diagram: scope ranges from all work on the system down to a specific or complex SQL request)
Look at the big picture first
Understand the high-level tools and measurements

9 Performance tuning - general tips
Scope the problem - is everything slow or is it something specific?
Keep records of what you change and when
Adjust one piece at a time
Dynamic environments make tuning difficult
Test and production environments may respond differently to "the same" thing
Tuning is an iterative process

10 Performance tuning - general tips
History indicates most performance problems are caused by:
Locally defined tables and reports
Poor queries (often using SELECT *, LIKE '%') - see the sketch below
Queries on tables with no index
Third-party applications not factored into R/3 sizing or sharing the R/3 memory pool
Basic work management errors
Suspect these areas first!
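A minimal sketch of the "poor query" patterns called out above, written against the EMPLOYEE table used later in this session; the exact column names are assumptions for illustration only.

-- Problem pattern: every column is fetched, and the leading wildcard
-- keeps the optimizer from positioning into an index on LASTNAME.
SELECT * FROM EMPLOYEE
WHERE LASTNAME LIKE '%SON'

-- Friendlier pattern: name only the columns you need and anchor the
-- predicate so key positioning on a LASTNAME index is possible.
SELECT LASTNAME, WORKDEPT, SALARY FROM EMPLOYEE
WHERE LASTNAME LIKE 'PETER%'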

11 Checking AS/400 performance
Examine computing resources - CPU, memory, disk
Use system commands or ST06
Work with System Status (WRKSYSSTS)
Work with Disk Status (WRKDSKSTS)
Work with System Activity (WRKSYSACT) or Work with Active Jobs (WRKACTJOB)

12 WRKSYSSTS

13 WRKSYSSTS
Check CPU %
Maximize memory pool size, set Paging Option to *CALC
The lower the DB faults the better; don't worry about the paging numbers much
Make "Max Act" as low as possible, but high enough to keep Wait->Inel and Act->Inel at 0
Set system value QPFRADJ=0 once you settle on values

14 WRKDSKSTS

15 WRKDSKSTS
Look at % utilization and % used
Utilization should be no more than 40% over time
% used should be roughly equal across arms; use STRASPBAL or SAV/DLT/RSTLIB to redistribute data evenly

16 WRKSYSACT

17 WRKSYSACT
Check the most active jobs
Look for interactive jobs or CFINTnn tasks taking up significant CPU

18 Checking general performance of your R/3 system
ST03 - Workload
ST02 - Buffers
ST04 - DB
ST06 - Operating system
SM50 - Work processes
SM51 - Servers

19 Checking DB performance
(Diagram: scope ranges from all work on the system down to a specific or complex SQL request)
Use ST04 and/or SM50 to find the report and query
Use ST04, ST05 and OS/400 commands for detailed analysis

20 ST04

21 ST04 - Detailed Analysis

22 Where does ST04 data come from?
The memory-based DB Monitor summarizes info on a statement basis and collects the information in memory
A scheduled SAP job calls an OS/400 API to dump the data to files in the R3<SID>400 library
ST04 reads the files
See the 4.6 SAP Documentation CD: Basis Components --> CCMS --> CCMS Monitoring --> Database Monitor

23 ST04 - Slowest Queries

24 ST05

25 ST05 - trace output

26 ST05 - Explain SQL

27 What are these things and where do they come from?
Prepared Statement
Explain
Host Variable
Access path
Access plan
Reusable ODP

28 DB2 UDB for AS/400 Architecture
(Diagram: high-level picture of the DB architecture and where optimization occurs - SAP 2-tier central server WP jobs and 3-tier app server work processes connect over the network through XDAEDRS/XDARECVR to the SQL Optimizer and DB2 UDB data storage and management)
SQL interfaces: Static (compiled embedded statements), Dynamic (prepare every time), Extended Dynamic (prepare once and then reference), Native (record I/O)

29 Extended Dynamic SQL
(Diagram: a dynamic SQL statement is checked against the SQL package (*SQLPKG) - has this dynamic request been previously executed? If so, the stored access plan is reused)
Each dynamic SQL PREPARE is parsed, validated for syntax, and optimized as an access plan is created for the statement
A generic plan is quickly generated on Prepare; the complete, optimized plan is built on Execute/Open

30 SQL Explain vs. SQL packages
The ST05 report tells you the library, package and statement
SQL explain shows you slightly different info than what is included in the SQL package
SQL package contents: statement name, statement text, statement parse tree, access plan
PRTSQLINF output:
STATEMENT NAME: OlBAAAEYCA
SELECT "BIASSTEP", "NOCOL" FROM R3TSTDATA/"SSQLR" WHERE "REP_NAME" = ? WITH UR
SQL4021 Access plan last saved on 05/16/00 at 11:20:26.
SQL4020 Estimated query run time is 1 seconds.
SQL402D Query attributes overridden from query options file QAQQINI in library QUSRSYS.
SQL4027 Access plan was saved with DB2 UDB Symmetric Multiprocessing installed on the system.
SQL4017 Host variables implemented as reusable ODP.
SQL4008 Index SSQLR used for table 1.
SQL4011 Index scan-key row positioning used on table 1.

31 Access Plan to ODP
(Diagram: CREATE turns the access plan into an open data path (ODP) - executable code and internal structures for all requested I/O operations)
The create process is EXPENSIVE: longer execution time the first time an SQL statement is executed
This is the "warm-up" effect of applications (may need to be accounted for in benchmarks); you cannot pre-open ODPs with SQL
DB2 UDB leaves ODPs open; certain events/situations force ODPs to close, causing expensive full opens (covered later)
This emphasizes the need for REUSABLE ODPs

32 Reusable ODPs
To minimize the number of ODPs that have to be created, DB2 UDB leaves the ODP open and reuses it if the statement is run again in the job (if possible)
Reusable ODPs consume 10 to 20 times less CPU resource than creating a new ODP
Two executions of a statement are needed to establish the reuse pattern
DB2 UDB tries to minimize the number of ODP creations ("full opens") by reusing existing ODPs as much as possible within a job. Creating an ODP costs 10 to 20 times more CPU than reusing an existing one. Reuse is only a benefit when the same statement is executed multiple times within a job, which is why DB2 UDB doesn't start reusing ODPs until after the second execution of the statement. A PTF is now available to enable reusable ODP mode after the first execution; details on the PTF are covered later on. Certain ODPs cannot be reused due to their particular implementation.

33 ST05 - trace output ODP reused

34 Reuse Considerations
Reusable ODPs do have one shortcoming: once reuse mode has started, the access plan is NOT rebuilt when the environment changes
What if a table that started out empty is now 5X its size since the last execution?
What if the selectivity of a host variable or parameter marker is greatly different on the nth execution of a statement?
What if an index is added for tuning after the 5th execution of the statement in the job?
Once in reusable mode, DB2 UDB is unable to react to environment changes (table size increase, new index, etc.) and rebuild the associated access plan
Reusable ODPs hold share locks on the associated database objects, so columns and indexes cannot be deleted once in reusable ODP mode

35 ST05 - trace output
(Screen shot with callouts: "Check for already prepared stmt" and "Why no check here?")

36 DBSL statement caching
Dynamic statements (those with table or field name variables in the ABAP) are checked against a cache in the DBSL
Cache misses need to be prepared as usual - into library R3<SID>X0000, package <tablename>
Cache hits are reused; there is no need to check or prepare another statement

37 ST05 - Explain SQL

38 How are all of the access plan decisions made?
Now you know...
Prepared Statement - SQL package
Explain - reasons behind the plan
Host Variable - "fill in the blank" for selection
Access path - fancy name for an index
Reusable ODP - faster data access
So how are all of the access plan decisions made?

39 Optimization...the intersection of various factors
(Diagram: "The Plan" sits at the intersection of many factors - server attributes, server configuration, Version/Release/Modification level, server performance, SMP, database design, job and query attributes, the SQL request, table sizes and number of rows, the static/dynamic/extended dynamic interfaces, views and indexes (radix, EVI), and work management)

40 What data access methods does the optimizer have to consider?
Non-Keyed Data Access Methods (see the sketch below):
Table Scan
Parallel Table Scan
Parallel Pre-fetch
Parallel Table Pre-load
Skip Sequential with dynamic bitmap
Parallel Skip Sequential
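A rough sketch of when two of these non-keyed methods tend to show up, reusing the EMPLOYEE table from the index examples later on; the BONUS column and the single-key indexes are assumptions for the sketch.

-- With no index over BONUS and most rows qualifying, the query will
-- normally cost out as a table scan: every row is read once, in order.
SELECT LASTNAME, BONUS FROM EMPLOYEE
WHERE BONUS > 0

-- With single-key indexes over the selection columns, the optimizer can
-- build a dynamic bitmap from each index, AND them together, and then
-- skip-sequentially read only the table pages that contain hits.
CREATE INDEX EMP_DEPT ON EMPLOYEE (WORKDEPT)
CREATE INDEX EMP_SAL ON EMPLOYEE (SALARY)
SELECT LASTNAME FROM EMPLOYEE
WHERE WORKDEPT = 'D01' AND SALARY > 50000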

41 These are the most common for R/3
Keyed Data Access Methods:
Key Positioning and Parallel Key Positioning
Dynamic Bitmaps / Index ANDing and ORing
Key Selection and Parallel Key Selection
Index-From-Index
Index-Only Access (sketch below)
Parallel Index Pre-load
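A small sketch of index-only access, one of the keyed methods above; the index name is an assumption, while the EMPLOYEE columns come from the later slides.

-- The index carries every column the query references, so DB2 UDB can
-- answer it from the index alone (index-only access) without ever
-- touching the table rows.
CREATE INDEX EMP_DEPT_SAL ON EMPLOYEE (WORKDEPT, SALARY)
SELECT WORKDEPT, SALARY FROM EMPLOYEE
WHERE WORKDEPT = 'A01'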

42 More complex queries, also common in R/3
Joining, Grouping, Ordering:
Nested Loop Join
Hash Join
Index Grouping
Hash Grouping
Index Ordering (sketch below)
Sort
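A brief sketch of index ordering versus a sort, borrowing TABLE1 from the later join example; the index name is an assumption.

-- If the leading index keys match the ORDER BY, the optimizer can read
-- the rows in index order (index ordering) and skip the sort step.
CREATE INDEX IX_ORD ON TABLE1 (CUSTOMER, ORDERNO)
SELECT CUSTOMER, ORDERNO FROM TABLE1
ORDER BY CUSTOMER, ORDERNO
-- Without a matching index, the selected rows are sorted (or hashed,
-- for grouping) after they are retrieved.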

43 Data Access Methods
(Chart: response time / system resources versus number of rows selected, from few to many - keyed access is cheapest for few rows, skip sequential sits in between, and a table scan wins when many rows are selected)

44 Key Positioning
Selection criteria are applied to ranges of index entries before the table is processed (see the sketch below).
Advantages:
Only those index entries that are within a selected range are processed
Can perform both join and selection processing within a single operation if the correct index exists
Potential disadvantages:
Can perform poorly when a large number of rows are selected
Used when:
Less than ~20% of the keys are selected
Ordering, grouping, or a join operation requires the use of an index
Selection columns match the leading index key fields
Since there are at least two I/Os for every record selected, retrieving a large number of records is cheaper through a table scan, which reduces the number of I/Os. This is known affectionately as Multi-key Frogger (MKF) internally. Equivalent to RPG SETLL or COBOL START.
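A minimal sketch of key positioning: the equal selection column matches the leading key of the index, so the optimizer can seek straight to the qualifying range. The index is an assumption, not one from the presentation.

-- WORKDEPT is both the equal selection column and the leading index
-- key, so DB2 UDB positions directly to the 'D01' range of the index
-- and processes only those entries (key positioning).
CREATE INDEX EMP_BYDEPT ON EMPLOYEE (WORKDEPT, LASTNAME)
SELECT LASTNAME FROM EMPLOYEE
WHERE WORKDEPT = 'D01'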

45 Creating the Perfect Index
A "perfect" index is a radix index that is permanent and can provide:
Good, useful statistics to the optimizer
Appropriate selection, joining, grouping, and ordering fields
Applicable key fields that are contiguous
Equal predicate fields first, one non-equal predicate field last (sketch below)

46 Art - The Perfect Index
CREATE INDEX X1 ON EMPLOYEE (LASTNAME, WORKDEPT, SALARY)
SELECT * FROM EMPLOYEE
WHERE WORKDEPT BETWEEN 'A01' AND 'E01'
AND LASTNAME IN ('SMITH', 'JONES', 'PETERSON')
AND SALARY BETWEEN ... AND ...
Think of processing a set of ranges: Jones A01 - Jones E01, Peterson A01 - Peterson E01, Smith A01 - Smith E01
(Table: sample index entries ordered by LASTNAME, WORKDEPT, SALARY - Jones A01 35000, Jones C01 51000, Jones D01 45000, Peterson C01 60000, Peterson E01 100000, Peterson E01 120000, Smith B01 47000, Smith C01 59000, Smith F01 62000)
Early elimination of rows is the key - narrow range(s)

47 Art - The Perfect Index
CREATE INDEX X1 ON EMPLOYEE (LASTNAME, WORKDEPT, SALARY)
SELECT * FROM EMPLOYEE
WHERE WORKDEPT BETWEEN 'A01' AND 'E01'
AND LASTNAME LIKE 'PETER%'
AND SALARY BETWEEN ... AND ...
Think of processing a set of ranges: Peter(-infinity) - Peter(+infinity)
With this index, the WORKDEPT and SALARY keys are of no use.
(Table: the same sample index entries ordered by LASTNAME, WORKDEPT, SALARY)

48 "Perfect" radix indexes... Statistics Multi key selection Index only access Nested loop join Index grouping Index ordering Selection, grouping, ordering CREATE INDEX IX1 on TABLE1 (YEAR, MONTH, CUSTOMER, ORDERNO) CREATE INDEX IX2 on TABLE2 (ORDERNO, QUANTITY, SALES_AMOUNT) SELECT A.YEAR, A.MONTH, A.CUSTOMER, SUM(B.QUANTITY), SUM(B.SALES_AMOUNT) FROM TABLE1 A, TABLE2 B WHERE A.YEAR = 1998 and A.MONTH in (10, 11, 12) and A.CUSTOMER = 'SMITH' and A.ORDERNO = B.ORDERNO GROUP BY A.YEAR, A.MONTH, A.CUSTOMER ORDER BY A.YEAR, A.MONTH, A.CUSTOMER Joining

49 Join Optimization Tips
Remember that there is no such thing as a hard and fast optimization rule, since the optimizer is data dependent. For every "rule of thumb" that you learn, there will always be at least one exception.
The single most important thing that will affect the performance of join queries is the join order.

50 Join Optimization Tips
At a minimum, make sure there are radix indexes built over all the join columns (see the sketch below)
You may have indexes built over both join columns and selection columns; this allows for multi-key joins
Create multiple single-key indexes over selection columns to take advantage of dynamic bitmaps, then create a radix index over the join columns
Create multiple single-key EVIs over foreign key columns to take advantage of dynamic bitmaps and the star schema join
Indexes are used to determine the average number of duplicate values for the join fields (statistics are available for the first 4 contiguous key columns only, or from a single-key EVI)
EVIs are not used for joins in V4R3 and are never used for nested loop joins
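A sketch of the first three tips against the TABLE1/TABLE2 pair from the "perfect radix indexes" slide; the index names are assumptions.

-- Radix indexes over the join column on both sides of the join.
CREATE INDEX T1_JOIN ON TABLE1 (ORDERNO)
CREATE INDEX T2_JOIN ON TABLE2 (ORDERNO)
-- Single-key indexes over the selection columns so dynamic bitmaps can
-- be built and ANDed before the join is performed.
CREATE INDEX T1_YEAR ON TABLE1 (YEAR)
CREATE INDEX T1_CUST ON TABLE1 (CUSTOMER)
SELECT A.CUSTOMER, B.QUANTITY
FROM TABLE1 A, TABLE2 B
WHERE A.YEAR = 1998 AND A.CUSTOMER = 'SMITH'
AND A.ORDERNO = B.ORDERNO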

51 Indexing Strategies - Basic Approach
You must create some indexes - they serve the optimizer for both statistics and implementation
In general: equal selection fields first, then join fields, or group-by and order-by fields (sketch below)
Be aware of limitations when creating variable length and null capable primary and foreign key columns
You may have to play around with key order based on the queries, the data and the selectivity of the columns
An EVI created over a single key with few distinct values results in a smaller symbol table and smaller vector; the use of this EVI is quicker and more efficient
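A possible reading of the "equal selection fields first, then join fields" ordering, again borrowing TABLE1 and TABLE2; treat the key order and names as assumptions to experiment with, not a prescription.

-- Equal selection fields (YEAR, CUSTOMER) lead, the join field
-- (ORDERNO) follows, so one radix index supports both the selection
-- and the join to TABLE2.
CREATE INDEX T1_SEL_JOIN ON TABLE1 (YEAR, CUSTOMER, ORDERNO)
SELECT A.ORDERNO, B.SALES_AMOUNT
FROM TABLE1 A, TABLE2 B
WHERE A.YEAR = 1998 AND A.CUSTOMER = 'SMITH'
AND A.ORDERNO = B.ORDERNO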

52 Indexing Strategies - More details
Consider index-only access: all columns in the SELECT clause as keys
Consider dynamic bitmaps and index ANDing/ORing: simple indexes can be combined together for selection
Consider EVIs for statistics, dynamic bitmaps, and the star schema join (sketch below):
Single key, low number of unique values
Fact table foreign key
Over a temporary results file to provide statistics
Check for messages and iterate
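A short sketch of a single-key EVI as described above; CREATE ENCODED VECTOR INDEX is the DB2 UDB for AS/400 syntax, while the column choice is an assumption.

-- Single-key EVI over a low-cardinality column: useful to the optimizer
-- for statistics and dynamic bitmaps, but not as an access path for
-- ordering or joining.
CREATE ENCODED VECTOR INDEX EMP_DEPT_EVI ON EMPLOYEE (WORKDEPT)
SELECT LASTNAME FROM EMPLOYEE
WHERE WORKDEPT IN ('A01', 'C01')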

53 Creating Indexes
Do not create lots of permanent indexes trying to cover every combination
Create one or two you feel will be good ones, run the job again (or use STRSQL for a single query) and see if they are used
If they are used and the run time is noticeably helped, consider them for permanent use
If they aren't used, delete them and try a different combination
Don't create indexes just to solve a single instance of a full open or query unless it accounts for a significant amount of time

54 What does another index cost?
Each additional index created for a table will cause overhead when:
Updates to the table include the index keys
Rows are inserted or deleted for the table
Full opens occur for that file (index evaluation)
It also costs extra disk space and extra save/restore time

55 Optimization...the intersection of various factors
(Diagram: "The Plan" sits at the intersection of many factors - server attributes, server configuration, Version/Release/Modification level, server performance, SMP, database design, job and query attributes, the SQL request, table sizes and number of rows, the static/dynamic/extended dynamic interfaces, views and indexes (radix, EVI), and work management)

56 When should I delete packages?
When factors influencing the optimizer have changed significantly:
CPU/memory changes
DB PTFs
Big changes in the database...client copy, client delete, language import
After an R/3 install, and before and after upgrades
Selected packages for new indexes
Don't forget...your system will need to build them again
It is safest to delete packages when R/3 is down

57 What is the DLTOLDPKGS job?
One other reason that you would delete packages...application changes
Runs at STARTSAP time
Attempts to improve performance by deleting packages related to ABAPs that have changed since the package was built

58 Deleted records and performance
Space for deleted rows is not reclaimed automatically
Deleted rows are reused by new inserts
Random "holes" can increase paging
Use RGZPFM to compress the tables
Check DB02 - deleted row analysis
The deleted byte count may be wrong for tables with variable length fields

59 DB02 - Deleted Records

60 Misc...
Memory...if you have it, the system will use it
Separate memory pools as appropriate
Test and tune queries before going live
Team up with power users to understand their queries
Set realistic response time expectations
Test applications and queries using the "real" interface, not Interactive SQL on server models - use Operations Navigator, RUNSQLSTM or Query Manager in batch instead

61 Review - What actions might I take to improve performance?
Get more computing resources
Move workload to a different time or day
Adjust use of computing resources
Change the workload, or influence the implementation
Change the performance expectations

62 Work with the system and the optimizer
(Diagram: the same optimization factors, annotated with where to work on each)
Optimizer version - DB PTFs
Server attributes, server configuration, Version/Release/Modification level, server performance - hardware issues
SMP
Database design - views and indexes (radix, EVI)
Job and query attributes - CHGJOB, CHGQRYA, QAQQINI file
SQL request - SELECT *, LIKE '%'
Table sizes, number of rows - DB02 for sizes, deleted rows, indexes
Static / Dynamic / Extended Dynamic interfaces
Work management - WRKSYSSTS (pools, paging, max active), WRKACTJOB (CPU)
The STRSQL/SQLUTIL plan may be different than the plan from R/3

63 Reporting performance problems
Check CPU, memory, disk
Provide the query, SQL package and/or explain output
Table stats: size, number of rows, number of deleted rows
Any known changes to the environment
Prior performance vs. current performance
What you have tried already

64 Additional Resources
R/3 on AS/400 info, education, services
DB2 Query Optimization Workshop - much of this session came directly from this class
Credits and thanks to Mike Cain and Kent Milligan of IBM PartnerWorld for Developers

65 Additional Resources
DB2 UDB for AS/400
IBM Redbook for SAP on AS/400
SAP Basis courses BC370, BC525
IBM Pubs: DB2 for AS/400 SQL Programming, DB2 for AS/400 SQL Reference, DB2 for AS/400 Database Programming

66 Please complete your session evaluation and drop it in the box on your way out.
Be courteous - deposit your trash, and do not take the handouts for the following session.
The SAP TechEd 2000 Staff

