1
IBM Data Server Manager for z/OS
John Casey, Senior Solutions Advisor
2
Thank you for joining us today to look at IBM’s Data Server Manager offering: simple and smarter application performance tuning with IBM® Data Server Manager for z/OS. Data Server Manager, also known as DSM, is a web-based integrated database management tools platform for DB2 for LUW and now for z/OS. The evolution of Data Server Manager into IBM DB2 tooling continues. DB2 for z/OS application developers can still rely on features and functions they've used and depended on in Data Studio, while DSM is designed for database administrators looking to manage DB2 servers. This webcast goes beyond the basics to look at the recent enhancements delivered in IBM DB2 performance tools that are realized in Data Server Manager (DSM) for z/OS. I’m going to kick this off by talking about some of the challenges that have faced our customers. Next, I’ll do some level setting with a brief overview of Data Server Manager. Then I will focus on application performance, showing how you can easily identify and prioritize your top SQL statements, displaying more information while using less system resources. From there I’m going to talk more specifically about how you can evaluate large numbers of queries quickly to perform end-to-end analysis. Then I will go over how you can take performance tuning to the next level with host variable collection and selectivity override profiles. Finally, I’ll wrap up with how you can get DSM, and additional Q&A.
3
What’s the cost of application downtime to your business?
Minutes of Downtime x Average Revenue per Minute = Downtime Losses. In the world today, many businesses never close. Stores that open their doors 24 hours a day, banks that serve customers through ATMs, global enterprises that operate across time zones, and businesses that support customers and suppliers through the Web are increasing in number. Consequently, the cost of system downtime is skyrocketing, and the avoidance of downtime is a critical concern. Downtime costs are enormous and growing. Contingency Planning Research, a management consulting firm, recently reported average hourly downtime costs for various industries. In 24 x 365 operations, even 99.95 percent availability still results in about four hours of downtime every year. That translates into an average annual cost of $276,000 to $26 million for these industries.
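To make the formula concrete using the figures above: four hours of downtime is 240 minutes, so the reported range works out to average revenue per minute of roughly $1,150 at the low end (240 x $1,150 ≈ $276,000 per year) and roughly $108,000 at the high end (240 x $108,000 ≈ $26 million per year). These per-minute figures are simply back-calculated from the annual cost range for illustration.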
4
Performance tuning challenges across the organization
Application Developer: “I don’t have time to hone my SQL skills. I need to focus on developing core application functionality.” DBA: “It is very challenging to aggregate performance data across our complex data environment.” and “I don’t understand why our developers aren’t focused on creating better performing SQL.” Now let’s look at the challenges in tuning queries across the organization. Writing good code and writing optimized SQL require different skill sets. Application developers rely on frameworks to write the SQL for them, and their focus on capability delivery can mean that performance is a second-order consideration, addressed only if there’s time. Often the QA environment is inadequate to shake out scalability issues, either from a workload perspective or from a data perspective. Experienced DBAs have the skill set to optimize SQL but, given their hectic schedules and numerous responsibilities and the disconnect we often see between application and database groups, they don’t often get a chance to focus on SQL optimization until problems arise. Once in production, query tuning is just very challenging. Performance problems can appear out of the blue: applications are running fine one day and not the next. What changed? The workload? The data characteristics? When the data changes (data volume, data distribution, etc.), access paths can change too. Did the database design change – for example, was an index added or dropped? DBAs are now also responsible for things like index elimination strategies. To begin, you have to get information from a number of places, often including the catalog, performance monitors, EXPLAIN data, etc. DBAs need a deep understanding of database concepts to assure good database design and query design. They need to think like the optimizer: which predicates are most selective, can the optimizer tell that they are the most selective, and what are the better options for filtering the data? Even with the best optimization engine in the world, query tuning can still be very complex and very time consuming. Tuning a single SQL statement is hard enough; tuning a group of queries or a query workload is an order of magnitude more complex. Workloads typically contain thousands of SQL statements, and analyzing each one of these statements is a lengthy, manual effort. After a review of each statement, the DBA is then required to consolidate the findings into something meaningful and apply a solution across the workload. IT Manager: “Performance problems seem to appear without warning and deep technical skills are hard to find.” QA Manager: “We can’t adequately test for peak workload since we don’t have enough human or IT resources.” LOB Manager: “I need to get my business results fast and accurate. What’s going on?”
5
Does this sound familiar?
SQL performance issues: It is a challenge to deliver applications that meet client and service-level objectives; the organization is under pressure to do more with less; and advances in programming frameworks contribute to performance issues. SQL tuning issues: Application performance is affecting the organization’s business. Aggressive project delivery schedules and limited human and database resources have a negative impact on application performance. Writing effective Java™ code and writing optimized SQL are different skill sets, and a lack of focus on query optimization during development adds to performance issues. Result: Service-level objectives are missed, business users experience unanticipated application and database outages, customer satisfaction declines, and revenue opportunities are lost. Does this all sound too familiar? Delivering applications with performance levels that meet both client expectations and service-level objectives is becoming more challenging. The organization is under tremendous pressure to do more with less: fewer people, less time, less hardware, and fewer software resources. Surprisingly, many of the advances in programming frameworks actually contribute to application performance degradation. Aggressive project delivery schedules and limited human and database resources often have a negative impact on application performance. Writing effective Java™ code and writing optimized SQL are different skill sets; application developers tend to have the former but not necessarily the latter, and increasingly rely on Java frameworks to write the SQL. Unfortunately, a lack of focus on query optimization during development can result in significant performance issues that negatively impact daily business operations. Service-level objectives are missed, business users experience unanticipated application and database outages, customer satisfaction declines, and revenue opportunities are lost.
6
Reactive vs. Proactive performance management
Reactive: problems addressed after the performance impact; measuring flashing-light indicators; noticing either the very good or the very bad; taking longer to react to bad performance because of measuring lagging indicators. Proactive: understanding what behavior is desired; measuring leading indicators; capturing best practices and procedures; the team responsible for creating the measurements understands the what and the why. One of the main objectives of an IT organization is to ensure that its infrastructure delivers the required performance so that business objectives are continuously met in a constantly evolving and changing business environment. This requires the DBA to adopt a strategy that is both proactive and reactive to conditions and events that would tend to adversely impact IT systems. The proactive effort involves a number of tasks, including the following: capacity planning of IT resources; choosing the most effective IT architecture for the current and anticipated workload; adopting best practices in application design, development, and deployment; performing rigorous regression testing prior to deployment in a production environment; and performing routine monitoring of key performance indicators to forestall potential performance problems, as well as gather information for capacity planning. The reactive effort involves having a well-defined methodology for identifying the root cause of a problem, and resolving the problem by applying best practices. In the following sections we introduce the concept of performance management, describe the different types of monitoring available, and discuss a typical methodology for effective performance problem determination in DB2 environments.
7
What should you look for in a performance management solution?
Cost reductions for DB2 and associated applications; faster identification and resolution; improved overall performance; replacement of ad-hoc methods; faster DB2 and application migration. DB2 provides lots of good performance monitoring and automatic capabilities, but this doesn’t tell the whole story. What about those applications which rely on DB2? How do you monitor and manage the performance of those applications? How can you track and trace queries into the database to ensure they are well tuned and running well? Does your DBA understand how to talk with your developers and application owners? Unfortunately, many organizations create homegrown tooling to collect diagnostic information and don’t view performance holistically. They have single-purpose tools which are manually created and don’t scale with additional DB2 instances. A performance solution provides comprehensive, end-to-end performance monitoring and management. Together with DB2, it creates a complete solution for managing performance that allows you to be both proactive and reactive at the same time.
8
IBM DB2 Performance Management Solution provides:
Fast identification with automated alerts, proactive notification, and 24x7 monitoring; proactive tuning of queries and workloads; expert advice with built-in advisors; a diverse set of capabilities managed via Data Server Manager (DSM); an easy-to-use, integrated view of overall DB2 performance management; and seamless navigation via functional capabilities rather than individual products. Based on the business and performance challenges described, the solution is the IBM® DB2® Performance Solution Pack for z/OS. It allows you to view performance holistically, with a seamlessly integrated set of specialized performance tools that enables you to quickly go through the performance tuning lifecycle – collect, monitor, analyze, optimize, test – and even manage your data to prevent performance problems in your DB2 for z/OS environment. This easy-to-use, intuitive interface provides the right tools in the right context to help you reduce CPU demands on your systems, increase IT productivity, and meet service-level agreements, while allowing you to be both proactive and reactive. Collect all the information you need to monitor, 24x7, with alerts. OMEGAMON XE for DB2 Performance Expert on z/OS: collect, monitor, analyze, and tune the performance of IBM DB2 for z/OS and IBM DB2 applications (system level). IBM DB2 Query Monitor for z/OS: monitor, track, and identify poorly performing SQL; customize and tune SQL and examine access to DB2 objects. Expert advice – Query Workload Tuner for DB2 for z/OS: tune SQL workloads, based on expert recommendations, to optimize application performance and reduce specialized skill requirements. The integration between monitoring and tuning functions minimizes resolution time in the most complex DB2 environments. If you currently rely on the skills of individual experts to resolve performance issues manually, you are vulnerable to single-point-of-failure scenarios. DB2 Performance Solution Pack helps you avoid these types of performance issues by providing expert diagnosis and tuning capabilities to all members of your technical staff, even those with minimal experience.
9
IBM Data Server Manager is one tool that consolidates all functionality from QWT, CMz, Data Studio administration, utilities, autonomics, and now end-to-end performance tuning and monitoring into one install and one repository. We also know that customers want to manage not just one database but potentially hundreds of databases within their enterprise. By giving you one install and one repository, IBM Data Server Manager provides a simplified user experience. The GUI is web based, which means that you don’t need to install a thick GUI; instead, point your web browser to a web address and you have all you need to work with the tool. For DB2 for z/OS we now support database administration, query tuning functionality, and best-practices configuration, and we are also integrating even more functionality with Management Console for z/OS, Query Monitor, OMEGAMON, the Utilities Solution Pack, and Data Server Manager – and trust me, there is more to come. Yes, there is a no-charge version of Data Server Manager. IBM is updating the base edition of DSM every 3-6 months, and plenty of the new features are designed for those of us who work in z/OS environments. With the latest round of enhancements, IBM is delivering greater integration with additional core products such as DB2 Query Workload Tuner and Query Monitor. There is continued focus on program development, with the capability to write, test, and tune SQL as well as stored procedures. DSM continues to expand and provide a common web-based UI, as seen with the integration with the Utilities Solution Pack. Features include: customizable profiles for identifying object situations and generating resolving JCL (REORG, COPY, RUNSTATS, etc.); automatic prioritization of object situations and their resolving actions; the capability to define maintenance windows for enabling active autonomics, allowing DB2 to self-manage utility runs; notification support (e-mail, text, WTO) on selected autonomic events; graphical trend analysis of historical RTS; and capture of utility history, recording utility output, time, duration, etc.
10
Data Server Manager – Where DBAs Spend Time
Administration helps you manage and maintain complex database environments for increased productivity and optimized use of system resources. Performance tuning helps you develop and implement a performance strategy, including expert recommendations to improve query workload performance. Identifying environment changes offers centralized management of database and client configuration. Troubleshooting lets you capture production application workloads, compare captures, and enforce configuration settings. As I talk to customers about their challenges, I have found that this is very representative of their experiences. There are five main tasks that take up the time: 1) administration tasks; 2) identifying problems, whether performance related or not; 3) diagnosing the specific nature of the performance problem; 4) resolving the performance problem, typically through some kind of tuning activity, along with root cause analysis of the environment; and 5) implementing changes. Here you can see that solving the problem typically relates to application issues, but may also be related to database design, poor statistics, or database configuration or resources. Again, DSM directly helps you resolve these issues. Data Studio helps database developers and administrators manage, administer, and develop database environments for increased productivity and collaboration. Query Workload Tuner provides specific recommendations on query changes that can reduce disk accesses or improve query plans, on database changes such as indexes or materialized query tables, and on improved statistics collection. And Configuration Manager helps users track and manage database and client configurations, enforce compliance to standards, and get better use of storage associated with DB2. So let’s look into these offerings in more detail…
11
Features of the Data Server Manager z/OS Based Tools - At A Glance
Data Server Manager Base (no charge):
- Connect to DB2 for z/OS V10/V11/V12
- Database object navigation, viewing object detail, and linking to related objects
- Database object dependency display
- Data browsing and editing
- Basic database object operations, such as creating tables, indexes, constraints, and tablespaces; dropping tables, indexes, and constraints; and altering tables
- Show system privileges from the perspective of Group/User, Role, or SQL object. Choose "Group/User" to see the roles and relative object privileges for a user account; "Role" to see the roles a user account belongs to and their relative object privileges; or "SQL object" to see a specific object and the users or roles that have the relative authority
- Single query tuning: Statistics Advisor, query environment capture, access path graph

DB2 Performance Solution Pack:
- IBM Query Workload Tuner: launch visual explain and tune query from the SQL editor; tuning wizard to capture SQL statements from multiple sources; tuning advisors (Statistics Advisor, Index Advisor, IDAA Advisor); problem analysis of a query or workload; access plan graph; query formatting and annotation; tuning report; test candidate index; access plan comparison; index impact analysis; query and workload environment capture; selectivity override
- IBM DB2 Query Monitor: launch DSM from the Query Monitor web UI for end-to-end performance analysis; host variable collection
- OMEGAMON XE for DB2 PE: key performance indicators (KPIs) displayed in Data Server Manager on the Subsystems Dashboard

DB2 Admin Solution Pack – Configuration Manager for z/OS V5.1:
- Track configuration changes
- Configure zParms
- Compare and clone configurations
- Manage application profiles
- Manage aliases
- Manage and control clients

DB2 Utility Solution Pack V4.1:
- Customizable profiles for performing conditional object evaluations and generating actions mapped to resolving utilities (REORG, COPY, RUNSTATS, etc.)
- Ability to control prioritization of objects, evaluation conditions, and generated resolving actions
- Ability to define maintenance windows for enabling autonomics, allowing DB2 to self-manage utility runs
- Graphical trend analysis of historical RTS
- Capture of utility history, recording utility output, time, duration, etc.
12
Where do I start? … Data Server Manager
Now let’s take a look at the key features of DSM and how to get started. I will go over the basic administration features, then the features that come with the Utilities Solution Pack.
13
Simple 3-step setup http://ibm.biz/IWANTDSM
DSM offers a lightweight setup option for real-time monitoring and database administration. After downloading a single zip or tar file, depending on your operating system, running setup.bat (on Windows) launches a 2-step wizard to (1) accept the license agreement and (2) specify the userid and password that will launch DSM, along with port numbers for this server. The Setup is complete page summarizes the 2 URLs that you use to launch the DSM web console.
15
Migrating workloads from QWT 4.1.x to DSM
16
Manage Databases Using the Database Explorer
Manage database objects: To access the admin functions, navigate to Administer. When you select an object type, such as a table, a grid of all the table objects is displayed. DSM allows you to filter this list so you do not have to keep scrolling; there is also a search capability for typing in the object name. Once a table object is selected, you can perform certain tasks against the object, such as Alter, Drop, Generate DDL, etc. You can also view the properties of the object. Explore database object properties. Explore the catalog.
17
Develop and Run SQL Scripts
Manage scripts, explain SQL, tune SQL, and validate SQL. To access the SQL Editor, navigate to Administer > SQL Editor. Features: syntax validation; syntax assist to help build SQL; save/open queries; explain a query to see Visual Explain; tune a query or workload; customize and filter result output; save execution results.
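For readers curious what the Explain action corresponds to under the covers, the SQL editor’s visual explain is built on DB2’s EXPLAIN facility. A minimal sketch, assuming a PLAN_TABLE already exists under your current SQLID and reusing the EMPLOYEES query shown later in this deck:

  -- Populate PLAN_TABLE for one statement (QUERYNO is an arbitrary tag)
  EXPLAIN PLAN SET QUERYNO = 100 FOR
    SELECT * FROM EMPLOYEES WHERE SALARY BETWEEN ? AND ?;

  -- Inspect the chosen access path: ACCESSTYPE 'I' means index access,
  -- 'R' means a tablespace scan; ACCESSNAME names the index used
  SELECT QBLOCKNO, PLANNO, METHOD, TNAME, ACCESSTYPE, ACCESSNAME
  FROM PLAN_TABLE
  WHERE QUERYNO = 100
  ORDER BY QBLOCKNO, PLANNO;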
18
Create and schedule jobs
Schedule a Create Table job from Explore Databases; schedule commands from Explore Databases. Scheduling and automating database operations by using existing solutions such as cron, batch programming, or script automation programs can be a complex operation for many DBAs. Rather than creating and maintaining the scripts and schedules separately, it is easier and more reliable for a DBA to group these database operations into jobs, then manage and execute the jobs from a single tool. A job is a container that holds all the information required to run a script or command against one or more databases according to a user-defined schedule. The job does not directly contain any information about which databases it will run against; instead, you associate database information when you create one or more schedules for the job. This flexible approach allows you to run the same scripts or commands against different databases and schedule them for execution at different times. The Job Manager in DSM provides the tools needed to create and schedule script-based jobs on your DB2 LUW and DB2 for z/OS databases. When you create a job, you can schedule it to run on one or more databases. A job can also be run manually from the Job Manager user interface, and you can configure multiple jobs to run sequentially by adding a chain of jobs to the job you are creating. Alternatively, create a script and schedule the job.
19
DB2 Utilities Solution Pack 2.2
Symptoms/actions on subsystem and object dashboards. “More integration, greater value” – optimize, control, manage, and automate. Components: DB2 Automation Tool, DB2 High Performance Unload for z/OS, DB2 Sort for z/OS, DB2 Utilities Enhancement Tool, autonomics support, and Data Server Manager. Now let’s go through some of the advanced administrative functions with the Utilities Solution Pack. DSM continues to expand and provide a common web-based UI, as seen with the integration with the Utilities Solution Pack. Features include: customizable profiles for identifying object situations and generating resolving JCL (REORG, COPY, RUNSTATS, etc.); automatic prioritization of object situations and their resolving actions; the capability to define maintenance windows for enabling active autonomics, allowing DB2 to self-manage utility runs; notification support (e-mail, text, WTO) on selected autonomic events; graphical trend analysis of historical RTS; and capture of utility history, recording utility output, time, duration, etc. View upcoming autonomic maintenance windows with scheduled actions. Automate data collection. Utility history.
20
Traditional Reactive Tuning
21
What to do next? … Performance Tuning Using Data Server Manager: Query Workload Tuner 5.1
We’ve reviewed some key administrative capabilities in DSM – and trust me, there is more to come. Now let’s see how you can identify, diagnose, and resolve performance problems. IBM DB2 Query Workload Tuner for z/OS v5.1 is a new web-based tool that is integrated within IBM Data Server Manager; we are packaging the expertise of professional query tuners into an easy-to-consume offering. Optimizing performance is important to the business. Better performance means improved customer satisfaction: application performance has a direct impact on customer satisfaction for both customer-facing and employee-facing applications. It means improved organizational productivity: employees can get more done when systems respond well, IT can satisfy service-level objectives or agreements, and workloads fit into maintenance windows, so there’s no delay in delivering critical reports and no impact to production workloads. Improving query performance increases overall system capacity: not only do you improve query and application performance, but you free up resources for additional workload. Similarly, you can reduce chargebacks when you make your queries more efficient. Query Workload Tuner simplifies and speeds up analysis for DBAs, developers, designers, and other roles in the organization. It not only removes much of the tedium from query analysis, but it provides expert recommendations to improve query and workload performance. Many shops don’t know what the best statistics are to collect, so they over-collect; it helps DBAs ferret out what is not needed, reducing wasted cycles on collecting statistics that don’t help performance. And it reduces costs as it reduces time to respond to new problems – it reduces the impact of sluggish response times and gets DBA and performance team members back to work on value-creation activities far faster.
22
Why is workload tuning important?
Workload: multiple SQL statements defined by the user. The effort of tuning the whole application for good performance by evaluating every statement is overwhelming. Optimization decisions are based on trade-offs: statistics – CPU cost vs. query savings; indexing – query speed vs. resource and transaction cost. Sometimes a performance improvement for one statement in an application may regress other statements in the application. When your application data grows, workload tuning allows you to run a proactive application health check periodically, finding potential problems before costly application outages. Workload tuning speeds up analysis by analyzing multiple queries at once, and it consolidates and optimizes recommendations for the overall workload: statistics recommendations and index recommendations.
23
Where is the most time spent?
Response time is the amount of time that DB2 requires to process an SQL statement. To correctly monitor response time, you must understand how it is reported. Some of the main measures relate to the flow of a transaction, including the relationship between user response time, DB2 accounting elapsed times, and DB2 total transit time. Transaction response times: class 1 elapsed time includes application and DB2 elapsed times; class 2 elapsed time is the elapsed time in DB2; class 3 is elapsed wait time in DB2. When accounting classes 2 and 3 are turned on as well, IFCID 0003 contains additional information about DB2 CPU times and suspension times. DB2 accounting elapsed times are taken over the accounting interval. The start and end of the accounting interval differ based on factors such as the type of attachment facility, whether threads are reused, the value of the CMTSTAT subsystem parameter, and other factors. DB2 class 2 not-accounted time represents time that DB2 cannot account for: it is DB2 accounting class 2 elapsed time that is not recorded as class 2 CPU time or class 3 suspensions. Not-in-DB2 time is the calculated difference between the class 1 and the class 2 elapsed time. This value is the amount of time spent outside of DB2, but within the DB2 accounting interval. A lengthy time can be caused by thread reuse, which can increase class 1 elapsed time, or by a problem in the application program, CICS, IMS, or the overall system. The elapsed times are collected in the records from the accounting trace and can be found in OMEGAMON XE accounting reports.
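A quick worked example of how these measures fit together (the numbers are illustrative only): if a transaction reports a class 1 elapsed time of 2.0 seconds and a class 2 elapsed time of 1.5 seconds, then not-in-DB2 time is 2.0 − 1.5 = 0.5 seconds spent in the application, CICS, IMS, or the overall system. Within the 1.5 seconds in DB2, if class 2 CPU time is 0.4 seconds and class 3 suspensions total 0.9 seconds, the remaining 1.5 − 0.4 − 0.9 = 0.2 seconds is class 2 not-accounted time.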
24
Improve Performance and Reduce Costs
Improve the end-user experience of performance: monitor KPIs that better reflect end-user experience (i.e., transaction response time); get early warning of degrading performance before users are affected; isolate problems to the correct area for fast response; get expert advice for improving query and workload performance. Reduce costs: improve performance and govern system utilization to defer upgrades; save hours of staff time and stress; isolate problems to the right layer of the application stack, database component, even the line of code; enable developers and novice DBAs to tune like an expert. Improving system-level performance and response time means monitoring KPIs that better reflect end-user experience, i.e. transaction response time, so you get early warning of degrading performance as the end user sees it. When problems occur, you can isolate them quickly, before end users are impacted, and resolve them quickly so that application performance is improved. And you not only improve performance of a single query, but of entire query workloads, balancing the cost of query improvement with the impact to transaction performance. At the same time you can reduce costs. Improving query performance typically reduces CPU consumption. Our support for workload management helps you align system resources according to business priorities and improve and govern system utilization to defer upgrades. You can save hours of staff time and stress by isolating problems to the right layer of the application stack, database component, even the line of code. By enabling developers and novice DBAs to tune like an expert, you get more value from your existing staff and make performance management an integral part of the development process, avoiding the costs of poor performance once applications are deployed. Within performance management, OMEGAMON simplifies and supports system and application performance monitoring, reporting, trend analysis, chargeback usage, and buffer pool analysis. If problems are encountered, you are notified and advised how to continue. Key performance indicators (KPIs) are displayed in Data Server Manager on the Subsystems Dashboard. Enable OMPE and get the full autonomics capability: with 24x7 access to these KPIs, you no longer have to rely on manual processes to collect diagnostic information. Accelerate analysis and reduce downtime for urgent situations.
25
Where is the time spent within DB2?
If someone has a performance question, one of the best first responses is "What traces are running?" After all, how can you determine whether something is not doing what you expect (within the parameters you expect) when you have no idea what it's doing? It can be a real problem. You don't always have the luxury of running something a second time and getting the same performance results, just so you can turn on the appropriate traces. So, do you run all of the traces all of the time? Well, maybe not. “At some point the ‘checkup’ may start to cause more damage than the disease you're trying to cure.” – Willie Favero. What about not using class 2? Well, that really won't work. You see, class 1 is your transaction time, while class 2 is the portion of the class 1 time spent in DB2 – your SQL time. Class 3 is how much of the class 2 time was spent waiting. Classes 7 and 8 pertain to packages: class 7 is your in-DB2 time and class 8 is your wait time, at the package level. So you see, you need them all if you are trying to troubleshoot an application performance problem. Be cautious when specifying traces, because tracing can be expensive.
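For reference, the traces behind these numbers are controlled with standard DB2 commands; this is a generic sketch (destination and filtering options vary by shop, and as noted above, broad traces have a cost):

  -DISPLAY TRACE(*)

answers the opening question "what traces are running?", while

  -START TRACE(ACCTG) CLASS(1,2,3,7,8) DEST(SMF)

starts plan-level accounting (classes 1, 2, 3) plus package-level accounting (classes 7, 8), writing to SMF.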
26
What is your query tuning objective: reduced CPU usage or reduced elapsed time?
27
Common Scenarios & Collection
Most expensive SQL statement in your DB2 subsystem Most expensive SQL within a PLAN All of the PLANS where a specific package is used All of the “exceptional SQL for a given plan” All of the objects accessed by a specific package All of the SQL which access a specific object Unnecessary negative SQLCODES
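Several of these scenarios can also be approximated directly against the DB2 catalog; for example, a sketch (object names hypothetical, and column details worth verifying against your catalog level) that finds the packages depending on a given table:

  -- Which packages touch table ADMIN.EMPLOYEES?
  SELECT DCOLLID, DNAME
  FROM SYSIBM.SYSPACKDEP
  WHERE BQUALIFIER = 'ADMIN'
    AND BNAME = 'EMPLOYEES'
    AND BTYPE = 'T';

Query Monitor answers the same questions from live activity rather than static dependencies.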
28
Determine what data needs to be collected
1 Data available to be collected or viewed: SQL metrics; DB2 object access; SQL text and host variables; DB2 commands; negative SQLCODEs; expanded and grouped information about exceptions; buffer pool statistics; delays. Three types of data: Summary – data summarized for each unique SQL statement executed in a particular interval of time, keyed by plan + program + section + statement # + statement type (SQL codes are not collected by default); Exceptions – individual SQL calls that have exceeded user-defined thresholds; Alerts – events that require immediate attention, which can be classified as exceptions. A monitoring profile enables you to tailor how DB2 Query Monitor monitors specific SQL workloads. Monitoring profiles control summary reporting, negative SQLCODE reporting, exception limits, alert notification thresholds, host variable reporting, and OPTKEYS override settings.
29
Identify workload for proactive tuning
2 Identify workload for proactive tuning: identify the top N (50, 200, 250, …) most expensive queries; drill down into results; save the workload and/or start tuning. Top-N filtering enables you to view only the top N SQL statement summary records, sorted by a column of your choosing, in either ascending or descending order. Often you may be looking for the worst SQL statement and want to find it using the smallest possible amount of system resources. You can use top-N filters to limit the number of rows that are returned by the CAE Server when viewing SQL text. In this example, let’s sort the columns in the data table to focus on the SQL with the highest execution count and CPU. You can perform a sort in either ascending or descending order based on a single column of data in the data table; in this example we will perform multi-column sorts. By using top-N filters to limit the number of rows returned when viewing SQL text, you can improve performance and reduce the impact your activity browsing has on the CAE Server. Top-N filtering enables you to locate the most problematic SQL statements while using the smallest possible amount of system resources.
30
Drill down into results
2 Tune or Tune All. Now let’s drill down into the results. Also notice the delay time; this could be caused by a number of factors – for example, poor buffer pool hit ratios might cause SQL delay times and SQL elapsed times to be high. The Details panel can include one or more of these tabs: Analysis – expandable lists of components that can contribute to the elapsed time (CPU, Delays, Unaccounted Time, and Specific Elapsed); these lists can be expanded to show more detailed statistics, all sub-lists within a parent list are sorted in descending order of duration, and any sub-list whose contribution to the parent item is more than 50 percent has its statistics in bold and is automatically expanded. Buffer Pool – a list of buffer pool columns, totals, and averages; equivalent to the ISPF line command B in Activity Summaries, Current Activity, and Exceptions; this tab has no chart. Delays – a list of delay events, event counts, and delay times; equivalent to the ISPF line command D in Activity Summaries, Current Activity, and Exceptions. General – a list of the columns and values in the active data table. HostVars – a list of host variables; equivalent to the ISPF line command H in Current Activity and Exceptions. Locks – a list of lock events and counts; equivalent to the ISPF line command L in Activity Summaries, Current Activity, and Exceptions. Misc – a list of miscellaneous statistics columns and values; equivalent to the ISPF line command Q in Activity Summaries. SQL Text – a display of the SQL text for the selected activity, with the option to show raw SQL text or to tune the SQL statement. SQLCA – a display of the SQL communication area information for a selected SQL statement, consisting of SQL communication area text describing the execution of the SQL statement as well as properties with their corresponding values; equivalent to line command C in the SQL Code detail display panel.
31
Capture from Statement Cache & customize collection
Accelerate analysis, reduce downtime
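For readers who want to reproduce a rough version of this capture by hand, DB2’s EXPLAIN facility can externalize the dynamic statement cache; a sketch, assuming the EXPLAIN tables (including DSN_STATEMENT_CACHE_TABLE) have been created under your SQLID and you hold the necessary privileges:

  -- Externalize the dynamic statement cache into DSN_STATEMENT_CACHE_TABLE
  EXPLAIN STMTCACHE ALL;

  -- Top 50 cached statements by accumulated CPU time
  SELECT STMT_ID, STAT_EXEC, STAT_ELAP, STAT_CPU,
         SUBSTR(STMT_TEXT, 1, 80) AS STMT_PREFIX
  FROM DSN_STATEMENT_CACHE_TABLE
  ORDER BY STAT_CPU DESC
  FETCH FIRST 50 ROWS ONLY;

DSM’s capture wizard automates this flow and adds filtering, but it draws on the same cache statistics.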
32
3 Execute Advisors: Statistics, Index, Analytics Accelerator
Statistics: get recommendations on the best statistics to capture to influence access path selection. Index: get recommendations on index changes that can reduce database scans. Analytics Accelerator: get recommendations on optimizing and managing accelerated analytic queries and applications. Once you’ve defined the workload to work with, you can run one or more advisors. The Statistics Advisor gives you recommendations on the best statistics to capture to influence access path selection – missing, outdated, and conflicting statistics. The Index Advisor gives you recommendations on additional indexes that can reduce database scans – add, drop, modify, and now impact analysis. The Analytics Accelerator Advisor analyzes a workload and recommends tables to be offloaded to IDAA and tables to retain in DB2. Typically, we suggest running the advisors in this order: 1. Statistics Advisor, 2. Index Advisor, 3. Analytics Accelerator Advisor. Accurate and complete statistics help drive good query advice, and of course the Index Advisor should not be executed without good statistics. The statistics and index advisors take the entire workload into consideration.
33
Improve statistics quality and collection
Conflicting statistics explanation. Results: accurate estimated costs, better query performance, less CPU consumption, and improved maintenance window throughput. Generates RUNSTATS control statements. DB2 uses a cost-based optimizer. Database statistics are the facts on which the optimizer makes decisions about access plans. The statistics typically include information about the number of rows in a table and the number of distinct values, the most frequent values, and the distribution of values in a column. The optimizer uses these statistics to compute the cost of each step in an access plan. Consequently, if the statistics are inaccurate, outdated, or conflicting, the optimizer will create inaccurate estimates for the costs of the steps in a query plan, leading to poor performance. RUNSTATS TABLE ALL INDEX ALL collects unnecessary statistics and misses key statistics: multicolumn and histogram statistics are not collected. There are often statistical correlations between columns. For example, there is a strong correlation between the MODEL and MAKE columns; likewise, there is a strong correlation between the CITY and STATE columns. Collecting statistics on individual columns (the default) may not be enough to provide the information required, so we need to collect column group statistics. Query Workload Tuner can recommend what kind of statistics to collect based on the query and the workload. Interestingly, about 80% of the access path issues reported to IBM DB2 Support are the result of incorrect statistics! So the Statistics Advisor analyzes which objects are most interesting at the query, workload, or database level and provides advice on: missing statistics – the optimizer assumes default values to determine costs, which could be completely inaccurate; conflicting statistics – the optimizer may find that there has been an inconsistent statistics collection strategy, resulting in conflicting statistics (for example, table and index statistics may be collected at different intervals); and obsolete statistics – the statistics could be very old and no longer represent the current state of the table. As you drill into the advice for a specific recommendation, you’ll see that the Statistics Advisor generates RUNSTATS control statements. You can save them for later execution or run them directly if you have the appropriate authority. You also see the explanation behind the recommendation – the instances of conflicting or missing statistics, as well as why the advisor thinks they are conflicting. In this case the column group statistics conflict with others. Improving statistics quality means that you are giving the optimizer accurate data on which to base decisions about optimizing the access plan. Thus you can improve query performance and reduce CPU consumption each time the query is run. In addition, knowing the right statistics to collect means you can stop collecting irrelevant statistics. Many organizations don’t really know what they should be collecting, so they just collect everything. That has the dual problem of wasting CPU on irrelevant statistics and increasing workload inside limited maintenance windows. So the Statistics Advisor increases both statistics quality and statistics collection efficiency. “80% of access path PMRs could be resolved by the Statistics Advisor before calling IBM support.” – IBM Support
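As a flavor of what the generated control statements look like, here is a hand-written RUNSTATS sketch for the MAKE/MODEL correlation example above; the database, tablespace, and table names are hypothetical, and in practice the advisor generates the real statements for you:

  -- Collect column group statistics so the optimizer can see the
  -- MAKE/MODEL correlation, plus the 10 most frequent value pairs
  RUNSTATS TABLESPACE DBVEHICLE.TSCARS
    TABLE(ADMIN.CARS)
    COLGROUP(MAKE, MODEL) FREQVAL COUNT 10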
34
Indexing advice to improve database design
Workload index impact analysis. Indexes are decided at the design stage, and a lot of effort is spent making SQL use the provided indexes. But what if the SQL is "right" and it's the indexes that are "wrong"? Indexes cost resources to maintain, and removing obsolete indexes simplifies their use. How do you simply test your hypotheses without impacting production? The advisor consolidates indexes and provides a single recommendation, enables what-if analysis, provides DDL to create indexes (run immediately or save), and lets you test before deployment using the virtual index capabilities built into the DB2 engine. DSM/Query Workload Tuner also provides index advice. This is most valuable when used in a workload context: it analyzes the workload and recommends additional indexes that would benefit the entire workload, including a consolidated set of prioritized recommendations. The Query Index Advisor might recommend indexes for the following reasons: foreign keys that do not have indexes defined; indexes that will provide index filtering and/or screening for the SQL statement; indexes that will provide index-only access for the SQL statement; indexes that can help avoid sorts. The recommendations are easy to use. If you analyzed index recommendations query by query, you would have to analyze and consolidate all the recommendations, including filtering out indexes that have less value to the overall workload and balancing DASD requirements against query performance improvements. The index advisor does all the analysis and consolidation for you, providing a single recommendation. The what-if analysis allows you to apply various constraints to the Index Advisor and see how those constraints modify the recommendations. This function applies to the entire set of index recommendations, not to a single index, so you can limit the amount of DASD that can be used or the number of indexes on a table. It also creates the DDL required to create the indexes, and gives you the ability to run it right away (assuming you have the proper authority) or save it for later review and execution. You can also customize recommendations and the DDL creation. For example, you can specify a maximum for the number of columns that can be part of an index key and change the creator ID that is used when index DDL is generated. You can also specify the FREEPAGE, PCTFREE, and CLUSTERRATIO values to be assumed for new indexes, and whether or not the index leaf page size can exceed 4 KB. For the workload index advisor, you can further customize analysis and recommendations based on how to weigh performance factors of the workload queries, the types of index optimizations to consider, and policies regarding the overall objective of the indexes – for example, performance only or performance/cost ratio. The advisor uses virtual indexes under the covers to assess the query improvement, so you know that the recommendations are good: the DB2 optimizer has, in essence, already used them in estimating the query costs. In the Existing Indexes section, you can find out whether the DB2 optimizer is using existing indexes and whether it would continue to use them after you followed the advisor's recommendations. This information includes: Index – the name of the index. Table – the name of the corresponding table. Creator – the qualifier of the index. Index Columns – the key columns in the existing index. Include Columns – the columns that are appended to the key columns and that can allow queries to use index-only access when accessing data.
These columns are not used to enforce uniqueness, but they can be appended only to unique indexes; include columns are distinct from key columns. Used After – indicates whether the index would be used if the recommended indexes were created. Used Before – indicates whether the index is used in the current access plans for the statements that reference the corresponding table. Foreign Key Index – indicates whether or not the index is on a foreign key in the corresponding table. Index impact analysis helps you evaluate how the recommended indexes impact the entire system, including the existing packages and dynamic SQL statements. You can also choose a specific workload to evaluate the impact on an application. There are several sources for capturing SQL statements: a local file, the dynamic statement cache, a plan/package, or a user-defined repository. These sources serve internal DB2 users (such as monitoring tools) and external DB2 users (such as the file system).
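To illustrate the include-column idea just described, a hypothetical DDL sketch (the table and index names are invented; in practice the advisor generates this DDL for you):

  -- Unique index on the key column; the INCLUDE columns allow queries
  -- that touch only EMPNO, LASTNAME, and WORKDEPT to use index-only
  -- access without widening the unique key
  CREATE UNIQUE INDEX ADMIN.XEMP01
    ON ADMIN.EMPLOYEES (EMPNO)
    INCLUDE (LASTNAME, WORKDEPT);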
35
Optimizing the selection and tuning of accelerated workloads
Workload Analytics Accelerator Advisor: identify candidate queries and tables to be routed to the Accelerator; implement advisor-based tuning recommendations for mixed workloads of accelerated and un-accelerated queries; diagram accelerated queries in access plan graphs; integrate with Query Monitor and OMPE for capturing query workloads for complete analysis; enable what-if analysis. Benefits: shorten the process of selecting tables to be accelerated; visualize access paths of accelerated queries; increase productivity by working with accelerated queries through a unified interface; increase overall system capacity. If you wanted to evaluate a large number of queries, analyzing the EXPLAIN information for each query would require a lot of time and effort. The new Workload Analytics Accelerator Advisor (WAAA) in Optim Query Workload Tuner can evaluate thousands of queries quickly. Optim Query Workload Tuner can be used to capture a workload from various sources, run the accelerator advisor for analysis, and then receive its recommendations. The accelerator advisor provides the list of tables to be added to or removed from the accelerator and the SQL statements that are eligible for offload, along with the estimated cost savings from query acceleration. It also allows you to do what-if analysis. Customer testimonial: "While there will always be benefit in analyzing single statements and making incremental performance improvements, the ability to analyze entire workloads and make performance improvements allows a DBA to implement transformational change." Next you drill down to the advisor's recommendations. The DBA opens the Analytics Acceleration section in Review Recommendations. The estimated total cost savings from query acceleration is calculated by determining how many seconds would be saved by offloading all of the eligible statements to an accelerator. The cumulative CPU time savings from query offloading is the total number of seconds saved when running eligible queries with Analytics Accelerator instead of DB2; the cost is shifted to Analytics Accelerator, saving MIPS costs. The columns include: On Accelerator – specifies whether a table is already on an accelerator (to determine which accelerators are hosting the selected table, right-click the table and select List Accelerators Hosting the Selected Table); References to Table – indicates how many times the table is referenced by eligible statements; Cumulative Estimated Cost – specifies the sum for all eligible statements that reference the selected table. These values help the DBA understand the weight of the table in the workload: the larger the numbers in these columns, the more the DBA should consider adding the table to an accelerator. By default, the list shows all of the tables that the advisor recommends offloading to Analytics Accelerator to maximize the benefit. The DBA can filter to see tables that are not recommended for offload and tables that are recommended for removal from the accelerator, and can drill down to a granular level to show the statements related to a table.
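For context on how eligible queries actually get routed to an accelerator at run time, DB2 exposes a special register that applications or scripts can set; a minimal sketch:

  -- Route eligible dynamic queries to the accelerator for this
  -- connection, falling back to DB2 if acceleration fails
  SET CURRENT QUERY ACCELERATION = ENABLE WITH FAILBACK;

The advisor's recommendations determine which tables to load; routing behavior like this determines whether a given statement can take advantage of them.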
36
Prevent problems before they impact the business
4 Optimize beyond the prior level of service: determine whether the later version of the collection has degraded performance; determine whether any packages have errors; identify which packages have SQL statements whose performance has degraded. Available actions: apply filters and review the comparison result; generate an HTML or CSV comparison report; generate a new query workload for tuning and perform analysis. Enhancements: compare two different workloads. Workload comparison: when viewing summaries, you can specify one workload as a baseline workload and a second workload as a comparison workload to view differences in metrics between the two. Comparing workloads allows you to: identify and analyze the differences in the performance metrics for the workloads you select – if the performance of a workload has degraded, you can compare workloads to find the plans, programs, or SQL statements that are contributing most to the decline; and measure the effect of any changes to a workload, such as tuning efforts, upgrading DB2 to a new version, or offloading some queries to an accelerator. Workload access plan comparison.
37
Derived Worksheet with V11/V12 Comparison
4 Prevent problems before they impact the business – migration comparison. OMPE creates spreadsheet data (CSV format) using the OMPE DB2 ESP support libraries. Procedure: create SAVE_FILE data from DB2 traces (statistics, accounting, etc.), then execute the CSV generation program. A special JCL/program to create ESP metrics in CSV format from DB2 traces will be provided to aid in V11–V12 comparison analysis, along with a spreadsheet template to load the CSV data. Derived worksheet with V11/V12 comparison: import the generated CSV data from the V11 and V12 executions into the provided worksheets.
38
Host Variable Collection & Selectivity Override
IBM solution exclusive! Why did the DB2 optimizer choose that path? This feature helps users improve query access plans for dynamic queries with parameter markers. The selectivity override feature utilizes parameter marker information, and users can deploy a selectivity profile generated by this function to create better access plans. SELECT * FROM EMPLOYEES WHERE SALARY BETWEEN ? AND ? Set the stage – what's the problem? The optimizer doesn't always make good decisions. The DB2 optimizer is a cost-based optimizer, so if it chooses a less-than-desirable path, what doesn't the optimizer know? Consider the example query above. When you use this feature, DB2 Query Monitor collects a fixed-size representative sample of all host variable values seen during a sampling interval. This allows you to find the most commonly used host variable values and better understand your application's performance. You can also use host variable information to tune your SQL queries. In the case of stripped SQL text, the staging table contains the first unstripped SQL text, even though sampling is performed on all unstripped SQL texts. To learn more about overriding predicate selectivity settings at the statement level, refer to the DB2 for z/OS documentation. NOTE: prerequisites are on the next slide.
39
Create a Baseline: run a test application now to get a baseline.
1 Average execution time for this application is 125 ms. Note that this query is already well tuned before selectivity override analysis. Remember this number.
40
Collect Activity: Collection Period
2 Collection Period – choose the Activity Browser data from a past time period. Summaries – access and refine the view of your system's query activity. Collect Host Variables – configure how the host variable information is captured and request host variable collection. When you use this feature, DB2 Query Monitor collects a fixed-size representative sample of all host variable values seen during a sampling interval. This allows you to find the most commonly used host variable values and better understand your application's performance. You can also use host variable information to tune your SQL queries. In the case of stripped SQL text, the staging table contains the first unstripped SQL text, even though sampling is performed on all unstripped SQL texts. Collect HostVars (popup/dialog): the Collect HostVars popup enables you to configure how host variable information is captured. It is only accessible if you are in Perspective: Summaries > SQL Text and a DB2 subsystem (V11) is in the selected drilldown path. Staging Tables Connection: from the Staging Tables Connection list, select the staging table connection you want to use to store collected host variable information. Click Create New to create a new staging table connection if the list does not contain one you want to use.
41
Analyze HostVars details and identify a candidate query
3 Tune or Tune All. For selectivity override analysis, look for statements with parameter markers and high elapsed time, CPU time, execution count, etc. For example, statements with the “?” parameter marker – not all statements qualify.
42
Launch DSM from Query Monitor – security settings are the same as in DB2, so you do not exceed your authority levels.
43
Tune selectivity override
4 Selectivity Override Job: go to View Workload Statements, where you can see that the query is a selectivity override candidate. Then select Host Variables. In DSM, the selectivity override tuning job is on the Tuning Jobs page. Select the job and click View Results.
44
Review analysis. In this dialog, you can see:
the parameter marker distribution and the weight of each parameter marker value set. Select the sets (all of them) for selectivity override analysis and click Selectivity Override. In this example, 18% of the time the gender code is 1. By default, the top 20 sets are handled.
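The skew that drives this analysis is easy to see directly in the data; a hypothetical query against the EMPLOYEES example (the column name is invented for illustration) that would surface the 18% figure:

  -- Value distribution for the host variable's column: a heavily skewed
  -- distribution is what makes one cached access path risky for all values
  SELECT GENDER_CODE, COUNT(*) AS ROW_COUNT
  FROM EMPLOYEES
  GROUP BY GENDER_CODE
  ORDER BY ROW_COUNT DESC;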
45
View results and deploy the selectivity profile
5 A selectivity override analysis job is created; click View Results when it is completed. Run the recommended scripts and flush the statement cache. The selectivity profile is stored in the DB2 catalog. The estimated cost of the better plan was higher than the cost of the worse plan: the optimizer didn't have the right information to make a good decision.
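A note on the "flush the statement cache" step: DB2 for z/OS has no single command to purge the global dynamic statement cache, and one common technique is a RUNSTATS that updates nothing, which invalidates the cached statements referencing the object so they are re-prepared under the new selectivity profile. A sketch with hypothetical object names (on newer DB2 levels the INVALIDATECACHE YES keyword makes the intent explicit):

  -- No statistics are collected or updated, but cached dynamic
  -- statements that reference objects in this tablespace are invalidated
  RUNSTATS TABLESPACE DBEMP.TSEMP REPORT NO UPDATE NONE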
46
Compare against baseline
6 Run the test application again after the selectivity override analysis. Average execution time for this application is now 92 ms – an improvement of 26%, on an already well-tuned query!
47
Move from reactive to proactive to predictive results
DB2 performance depends on the “health” of DB2 objects. Often, a REORG is done on DB2 application objects whether needed or not, affecting resources and availability. The best REORG is the one you don’t perform. Consider the following: granular query monitoring and analysis at the object level; the ability to detect, apply intelligence, and decide the best action for an object – perform a REORG if performance will improve, or skip the REORG if there is no performance benefit; aligning data management with application needs; hands-free performance monitoring tied into maintenance actions. This improves application performance and reduces system and IT resources. The decision flow: DB2 application → analyze and act → REORG or no REORG. Let’s move from reactive to proactive to predictive: REORG avoidance. Performance depends on the access path and may be severely affected if the need for REORG is ignored. However, in some cases REORG does not improve access performance, and there is no point in a REORG unless there is a space issue. To avoid unnecessary REORGs, it is important to monitor access performance and identify any increase in the number of getpages due to relocated rows or unclustered data, which is usually associated with performance degradation. The idea is to REORG objects only when there is an impact on performance. Query Monitor within the Performance Solution Pack collects data for each query down to the object level, even when multiple objects are accessed in the query. It is being enhanced to detect whether performance has degraded since the last REORG was run and to make a recommendation back to the Autonomics Director on whether or not to REORG the object. The flow: autonomics passes objects to Query Monitor that look like they need a REORG based on real-time stats, and Query Monitor indicates whether performance is actually degrading for each object. If performance is not degrading (due to increased getpages or elapsed time), the REORG recommendation is not made. In addition, the analysis can be used to see whether a REORG indeed improved access performance and was justified. Avoiding unnecessary REORGs minimizes REORG cost and focuses effort on the critical objects that can benefit. Automation tools allow you to trigger a REORG when there is a performance decrease and save the resources used by useless REORGs.
48
IBM delivers complete DB2 performance management
Reduce costs of DB2 for z/OS and applications. Improve performance of all packaged applications. Tune the performance of warehouse queries. Identify and solve problems faster, closing the loop on problem resolution. Replace ad-hoc methods with integrated solutions for a scalable, robust approach to performance management. Improve performance and time to resolution by up to 50%. Speed DB2 and application migration with comprehensive comparison capabilities.