DB2 12 is HERE! The No. 1 Enterprise Database Gets Even Better!
4th October 2016
Agenda
- Kick Off & Introduction – Tom Ramey, Director, IBM DB2 for z/OS
- DB2 12 Latest News & Update – Jeff Josten, IBM Distinguished Engineer
- DB2 Tools & Utilities – Haakon Roberts, IBM Distinguished Engineer
- DB2 12 Business Value, an Independent View – Julian Stuhler, Gold Consultant, Triton Consulting
- Live Q&A Session – Panel of Speakers
Announcing DB2 12 – The #1 Enterprise Data Server
GA 21st October 2016!
- Improved business insight: highly concurrent queries run up to 100x faster
- Faster mobile support: 6 million transactions per minute via a RESTful API
- Enterprise scalability, reliability and availability for IoT apps: 11.7 million inserts per second, 256 trillion rows per table
- Reduced cost: 23% lower CPU cost through advanced in-memory techniques
What are our customers saying about DB2 12?

DB2 12 rules the API Economy
“The RESTful API is yet another way in which DB2 is at the leading edge – again cementing DB2’s and the mainframe’s position as a fully capable server in today’s IT infrastructure. Using these REST services, mobile applications can be both built faster and run faster!”
Frank Petersen, Chief Architect, Bankdata

DB2 12 Cost Savings
“The biggest benefit with DB2 12 comes with the index in-memory optimization (Fast Traverse Block), which provides incredible cost savings through lower CPU consumption for OLTP – nearly 9-10% after REBIND. This should bring down our mainframe operating costs.”
Jacek Surma, DII, Zespół Systemów Mainframe

DB2 12 – Exciting New Capabilities
“We love the ‘agile partition technology’ that DB2 12 offers. This feature makes it easier for ITERGO to address ‘hot spots’ where new data is inserted. This is particularly important when enterprises are looking for scale, speed and reduced costs.”
Walter Janissen, Chief Architect, ITERGO

DB2 12 Availability & Security
“We are very pleased with many of the new DB2 12 features, especially Transfer Ownership and the pending alter column feature. These give our enterprise the higher availability and security that are critical in the banking industry.”
Jacek Surma, DII, Zespół Systemów Mainframe

DB2 12 – Offering Advanced “In-Memory” Technology
“We are looking forward to exploiting the advanced ‘in-memory’ technology that DB2 12 offers (Index Fast Traverse Block). This gives us an opportunity to reduce CPU resource consumption and performance cost by using more real memory – a very cost-effective trade-off for enterprises like us that run DB2 12 on z13 machines. During testing we have seen up to 23% CPU reductions in specific test cases.”
Henrik Henriksen, DB2 DBA, Danske Bank

DB2 – The #1 Enterprise Server for Mission-Critical Data!
“We are really excited about the performance enhancements in DB2 12, especially the advanced ‘in-memory’ (Fast Traverse Block, FTB) capabilities. During testing we have seen up to 5% CPU reduction, which clearly translates into enormous potential cost savings and positions DB2 12 as a leader in the enterprise database market.”
İbrahim Parlak, IT Manager, Garanti Bank
DB2 12 is HERE! Announced 4th October 2016
Jeff Josten IBM Distinguished Engineer DB2 for z/OS Development IBM Silicon Valley Lab
Please Note
IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
Agenda
Quick overview of some major DB2 12 features
- A small subset; many features are not covered here
- Utilities features are covered by Haakon in the next section
- More detail will be forthcoming at GA
DB2 for z/OS Timeline
- Version GA dates: V7 3/2001, V8 3/2004, V9 3/2007, V10 10/2010, V11 10/2013
- DB2 12 was years in the making; the DB2 12 ESP (“beta”) started in March 2016
- V9 end of service: 4/2014
- End of service for DB2 10: announcement was made today, effective September 30, 2017
- Withdrawal from marketing for DB2 10: announced June 2014, effective July 6, 2015
DB2 12: Open for Data
Redefining enterprise IT for digital business and the mobile app economy
- Scale and speed for the next era of mobile applications: over 11 million inserts per second measured; 280 trillion rows in a single table, with more flexible schemas
- In-memory database: 23% CPU reduction for lookups with advanced in-memory techniques
- Next-gen application support: up to 384 million transactions per hour through a RESTful web API
- Deliver analytical insights faster: up to 100x speedup for query workloads

Speaker notes: These are high-level claims; more details follow on later slides. For context, about 23 billion text messages were sent each day in 2015 (8.3 trillion per year, or 16 million a minute), so 280 trillion rows is enough to hold roughly 30 years of every text message sent on the planet.
The final insert rate measured was 11.7 million (11,797,700) inserts per second, achieved through 12 data sharing members inserting into one partitioned table with no index, with all objects GBP-dependent. To set a harder, more realistic task, we inserted into a table with one index; after another round of tuning we recorded 5.3 million (5,364,300) inserts per second, again through 12 data sharing members with all objects GBP-dependent and 1,200 zLinux clients inserting into one table. (An earlier measurement showed a log rate of 1,366 MB per second across the 12 members.) After adding more CPs to relieve host CPU starvation, the remaining bottleneck is the CF link: with a LOCK structure request rate of 160K per second and GBP at 325K per second, our 9-CP CF reached 92% utilization due to async conversion. Since we have already used the maximum number of ICP links that a z13 can support, we will investigate how to get more connections to the CF without impacting other work such as the async duplexing evaluation.
The ability to ingest tens of thousands, sometimes hundreds of thousands, of rows each second is a critical application requirement in the era of mobile devices and the Internet of Things. Examples of such applications include tracking clicks on a website, capturing Call Data Records (CDRs) in a mobile network, tracking events generated by smart meters, and capturing data from thousands or tens of thousands of mobile application users. Many consider it essential to use a NoSQL database for high data ingestion rates. This is simply not true, especially with the advent of DB2 12, which allows very high insert rates without having to partition or “shard” the database, all while the data remains queryable with standard SQL and full ACID compliance.
z/OS Connect: 384 million transactions per hour works out to 106.8K per second sustained, measured in April 2016 using the latest available version of the DB2 adapter for z/OS Connect. The 106.8K tx/s was achieved with 8 x 30 users using 8 DB2 adapter servers on 8-way data sharing. This is very similar to last September’s result: 6-way data sharing on 2 z/OS LPARs, 6 WLP servers, clients on 2 zLinux systems, with a simple lookup statement issued from 360 concurrent clients; DB2 class 1 response time averaged 2-4 milliseconds. Allocate plenty of zIIP processors, as the majority of the cost is in the WLP servers and is zIIP-eligible.
In-Memory Index Optimization
- A new Index Fast Traverse Block (FTB) is introduced: a memory-optimized structure for fast index lookups, residing in memory areas outside the buffer pool
- New zparm INDEX_MEMORY_CONTROL; default AUTO (minimum of 500 MB or 20% of allocated buffer pool storage)
- UNIQUE indexes only, with a key size of 64 bytes or less
- DB2 automatically determines which indexes would benefit from FTBs
- The DISPLAY STATS command shows which indexes are using FTBs
- New SYSINDEXCONTROL catalog table: specify time windows to control the use of FTBs for an index
- New IFCIDs 389 and 477 track FTB usage

Speaker notes: The FTB storage pool has an upper limit of 200 GB. IFCID 477 is written when an FTB is allocated or deallocated; IFCID 389 tracks all FTBs on the system and is written at 2-minute intervals.
The intent is for this to be an autonomic feature, though some controls are provided as well. DB2 detects which indexes are frequently used for traversals (key lookups); once a threshold is reached, it builds an FTB for that index in a storage area alongside the buffer pool. The top levels of the index (probably 2 or 3) are cached there, allowing very fast traversal through those levels without the page-oriented traversal done today for index B-trees in the buffer pool. A significant performance improvement is expected for indexes with frequent lookups. FTBs are not attractive for indexes that are volatile in size (due to heavy inserts and deletes, for example), but when an index is used predominantly for read access, traversing the first few levels of the index tree becomes significantly faster.
Zparm INDEX_MEMORY_CONTROL settings: AUTO (the default) uses the larger of 10 MB or 20% of buffer pool size; DISABLE means no FTBs (no additional storage allocated); a value of 10 MB to 200 GB lets the user specify the upper limit regardless of buffer pool size.
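As a hedged sketch of how these controls might be used (the subsystem, index, and time values are invented, and the SYSINDEXCONTROL column list is an assumption to verify against the DB2 12 catalog documentation):

```sql
-- DB2 command: show which indexes currently have FTBs allocated
-DISPLAY STATS(INDEXMEMORYUSAGE)

-- Hypothetical SYSINDEXCONTROL row: disable FTBs for one index
-- during a weekly maintenance window (TYPE 'F' = FTB control,
-- ACTION 'D' = disable; time format assumed)
INSERT INTO SYSIBM.SYSINDEXCONTROL
       (SSID, PARTITION, IXNAME, IXCREATOR, TYPE, ACTION,
        MONTH_WEEK, MONTH, DAY, FROM_TIME, TO_TIME)
VALUES ('DB2A', NULL, 'ORDERS_IX1', 'PRODSCHEMA', 'F', 'D',
        'W', NULL, 7, '2200', '0400');
```

The DISPLAY output, together with IFCIDs 389/477, gives a way to confirm whether DB2's autonomic choices match expectations before adding any manual SYSINDEXCONTROL overrides.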
Simple Lookup: Faster & Cheaper
Up to 23% CPU reduction for index lookups using the DB2 12 in-memory index tree.
Speaker notes: This shows the benefit of the Index Fast Traverse Block (FTB). FTBs are in-memory optimized index structures that can be traversed much faster than the page-oriented traversal used for indexes cached in buffer pools. Consider a 30-billion-row sales transaction table that is queried heavily for product lookups: today the product index is a page-oriented structure, and traversing the index tree from the root down is always page-oriented. In DB2 12, the page-based index is converted into a memory-optimized index structure, moving DB2 beyond a purely page-oriented, disk-based design for this access pattern.
Hear what the customers are saying about DB2 12 “in-memory” technology!

DB2 – The #1 Enterprise Server offering “in-memory” technology
“In the past few releases of DB2 for z/OS, IBM has systematically removed the most significant virtual storage limits in DB2, so that customers can more fully exploit the real storage available in their System z servers. The ‘in-memory’ features of DB2 12 take this a step further, allowing customers to really take advantage of today’s larger memory configurations in order to significantly reduce operational costs and improve application performance. There’s more still to do, but DB2 is now a bona fide in-memory database.”
Julian Stuhler, DB2 Specialist, IBM Gold Consultant, IBM Champion for Analytics

DB2 – The #1 Enterprise Server for Mission-Critical Data!
“We are really excited about the performance enhancements in DB2 12, especially the advanced ‘in-memory’ (Fast Traverse Block, FTB) capabilities. During testing we have seen at least 5% CPU reduction, and this clearly translates into enormous potential cost savings and positions DB2 12 as a leader in the enterprise database market.”
İbrahim Parlak, IT Manager, Garanti Bank
INSERT Performance
- Insert workloads are among the most prevalent and performance-critical
- DB2 12 delivers significant improvements for non-clustered inserts (the journal table pattern) in UTS with MEMBER CLUSTER
- An advanced new insert algorithm streamlines the space search
- The default is to use the new fast algorithm for qualifying table spaces
- The INSERT ALGORITHM zparm can change the default
- The INSERT ALGORITHM table space attribute can override the zparm
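The table space-level override can be sketched as follows (database and table space names are hypothetical; ALGORITHM 2 selects the new fast insert path, 1 the classic one, and 0 defers to the zparm default):

```sql
-- Enable the fast insert algorithm on a qualifying UTS
-- (MEMBER CLUSTER) regardless of the subsystem-wide zparm
ALTER TABLESPACE MYDB.JOURNALTS
  INSERT ALGORITHM 2;
```

Because the attribute overrides the zparm, individual high-volume journal-style table spaces can opt in (or out) without changing subsystem-wide behavior.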
DB2 12: 11.7 Million Inserts Per Second – Scalability Without Compromise
- Simulated stock exchange transactions, using the actual customer’s table schema
- Utilizes DB2 12 features: the new insert algorithm and scalability enhancements
- All done in a single z13 box: 12-way DB2 data sharing in a 4-way sysplex with high availability
- DB2 12 for z/OS, z/OS 2.2, z13 and DS8870
- 1,200 clients triggered from 2 zLinux systems using sysplex workload balancing against the 12 DB2 members
- With logging on, the 12 members produced a total of 2,000 MB of log per second!
Significant CPU Reduction in DB2 Query Workloads
[Chart: CPU reduction (%) by workload type – UNION ALL with view; complex outer join with UDFs; complex reporting with large sorts; simple queries and large data scans]
Speaker notes: DB2 development accumulates interesting customer workloads and important target workloads to stress-test the performance enhancements in each release. This slide shows the variability of DB2 12 performance results. These results are isolated measurements used to reliably test each DB2 enhancement and may not be representative of actual customer results, due to the variability of activity in a customer environment. These workloads also do not incorporate all of the DB2 12 performance enhancements. The main targets are newer workloads, such as SAP Fiori and WebSphere Portal, each achieving up to 80% improvement in isolated test results; these workloads contain outer joins and UNION ALL, both optimized in DB2 12. There is also the CUST1 workload, which DB2 maintains in both a well-clustered and a poorly clustered state: thanks to improved access path choices on poorly clustered data, the unclustered version improved 65%, which may translate into better performance between REORGs, or when an inappropriate clustering index was chosen. Two workloads, SAP BW and BIDAY-Long (long-running queries), achieved minimal improvement; the majority of their time is spent scanning data, a perfect target for IDAA and thus not the primary focus of the optimizer work in DB2 12. Other workloads, representing various traditional patterns such as batch and simple reporting, achieved 5-25% improvement. These results may continue to evolve, since DB2 12 development was still in progress when this data was reported.
DB2 12: Simplicity and RAS
- Dynamic SQL plan stability: stabilize the performance of repeating dynamic SQL statements
- RUNSTATS automation: the optimizer automatically updates profiles with RUNSTATS recommendations
- RLF control for static packages
- LOB compression using zEDC hardware
- DRDA fast load: a callable command for fast load of data into DB2 directly from files on a distributed client
Dynamic SQL Plan Stability
Problem: unstable performance of repeating dynamic SQL statements. Environmental changes can cause an access path change or a performance regression, and this can be tough to manage: RUNSTATS, applying software maintenance, DB2 release migration, zparm changes, schema changes.
Static SQL has several advantages: the access path is established at BIND time, and static plan management provides advanced management functions.
Objective: extend the advantages of static SQL to dynamic SQL.
Speaker notes: Applications that use repeating dynamic SQL (including, but not limited to, SAP, PeopleSoft, and WebSphere) often suffer from query performance instability compared to static SQL. While the risk of any individual query regressing on any given day is small, thousands of queries are exposed to access path changes whenever statistics, the environment, the maintenance level, or even the DB2 release changes, resulting in exposure to query performance regression. For static SQL, the risk of unexpected regression is substantially reduced: new access paths are introduced only at BIND/REBIND time, and packages are not impacted by RUNSTATS, maintenance, or release changes until REBIND. Static SQL can also recover from regression: PLANMGMT(BASIC/EXTENDED) preserves package structures in the PREVIOUS/ORIGINAL slots, REBIND SWITCH(ORIGINAL/PREVIOUS) restores a prior access path and its runtime structures, and REBIND APREUSE generates new runtime structures while reusing the existing access path. The goal is to extend these benefits to dynamic SQL, and DB2 12 takes the first step by providing the ability to stabilize the SQL. The plan management features available for static SQL are not yet available for dynamic SQL, however.
Dynamic Plan Stability
DB2 12 provides the base infrastructure:
- Opaque parameter CACHEDYN_STABILIZATION
- Capture: a command (with or without monitoring) and a global variable
- FREE
- EXPLAIN (current, invalid)
- Invalidation
- LASTUSED (to identify stale statements)
- Instrumentation (query hash, explain, cache + catalog hit ratio)
- APPLCOMPAT is part of the matching criteria
Key DB2 12 limitations:
- Temporal stabilization not currently included
- REBIND support not included: no PLANMGMT/SWITCH/APREUSE
Speaker notes: DB2 12 provides the infrastructure to save, reuse, monitor, and manage runtime structures for dynamic SQL:
- Stabilize runtime structures, explain information, and dependencies for dynamic SQL into new catalog tables.
- A command that qualifies statements in the statement cache using CURRENT SQLID as a scope and number of executions as a threshold; qualified statements are stabilized into the new catalog tables.
- Infrastructure to locate and load stabilized statements so their runtime structures are reused on subsequent executions.
- Instrumentation to know which statements are using which stabilized runtime structures.
- The ability to FREE stabilized runtime structures.
- The ability to capture statements via a persistent monitor that continues to stabilize dynamic SQL over time as statements qualify for the CURRENT SQLID scope and exceed the desired execution threshold.
- Externalization of the statement hash, so a customer can use it as a stable identifier to monitor statements over time, even as they move in and out of the statement cache and/or the catalog as stabilized dynamic SQL.
- Invalidation of stabilized entries when DDL or authorization changes occur.
- Tracking of the LASTUSED date for stabilized dynamic SQL.
- The ability to externalize explain records for stabilized dynamic SQL.
- A global variable to drive stabilization of dynamic SQL for specific applications.
The zparm CACHEDYN_STABILIZATION is provided for serviceability. Statements concentrated with literals, and bitemporal statements, are not supported for stabilization in DB2 12. None of the plan management features, such as REBIND SWITCH or APREUSE/APCOMPARE, are available for stabilized dynamic SQL in DB2 12.
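The capture command described above might be used along these lines (the stabilization group name and threshold are invented, and the exact keyword spelling should be checked against the DB2 12 command reference):

```sql
-- Stabilize cached dynamic statements that have executed at least
-- 100 times, and keep monitoring so that future statements which
-- cross the threshold are stabilized too
-START DYNQUERYCAPTURE STBLGRP(PAYROLL) THRESHOLD(100) MONITOR(YES)

-- Check the status of active captures
-DISPLAY DYNQUERYCAPTURE
```

Running the capture under a named group keeps the stabilized statements identifiable later, when they need to be freed or monitored via the externalized statement hash.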
Hear what the customers are saying about Dynamic Plan Stability!

DB2 12 – More Control over Dynamic Plan Stability
“Dynamic plan stability gives us much better options to control and manage dynamic SQL. It is very important for us, as dynamic SQL becomes more and more popular. Dynamic SQL is becoming an increasing part of our workload, and we want the same control over dynamic SQL that we have had for static SQL for years.”
Henrik Henriksen, Danske Bank, Mainframe & Midrange Services

Gaining Deeper Insight with DB2 12
“I am excited about all of the great new SQL functionality that is coming with DB2 12. Things like FETCH FIRST on DELETE, simpler pagination, and of course, tons of performance enhancements, will make it easier than ever to get insight from our DB2 databases.”
Craig Mullins, IBM Gold Consultant, IBM Champion for Analytics
DB2 12 DRDA Fast Load
Problem: DB2 provides the DSNUTILU stored procedure to load data from a client, but this is difficult to use; the application must transfer the data to a z/OS file first.
Solution: a DB2 client API (CLI and CLP) for remote load into DB2:
- Easy, fast loading of data from a file that resides on the client
- Internal format (used by SAP), as well as delimited and spanned (LOB data)
- Overlaps network operations with data ingest on the DB2 server
- Measured results show it is as fast as, or faster than, the DB2 LOAD utility
- zIIP eligible
Speaker notes: On the latest measurement (5/20/16) we achieved a rate of 1 GB/s, or 10.7 million rows/s, from a zLinux client. We are also looking to implement DRDA fast load in JCC so it can be exposed to Spark for fast writes of Spark data into DB2. The CLI interface is buffer-based; the CLP (ZLOAD) is file-based: start the load operation, stream the data buffers (SQLPutData), then stop the load operation. LOBs are not supported with internal format; if there are no LOB columns, SAP uses internal format.
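From the client command line processor, a load might look roughly like this sketch (the table, file name, and option spelling are assumptions based on the slide, not exact syntax; the CLP ZLOAD command reference has the details):

```sql
-- CLP ZLOAD: stream a delimited client-side file directly into a
-- DB2 for z/OS table, with no intermediate z/OS data set
ZLOAD INTO PRODSCHEMA.SALES FROM /data/sales_2016.del OF DEL
```

The CLI path is the buffer-based equivalent: start the load, feed buffers with SQLPutData, then end the load, letting network transfer overlap server-side ingest.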
DB2 12: Application Enablement
Several SQL PL improvements:
- SQL PL in triggers, including versioning and debug support
- SQL PL obfuscation
- Support for constants
- Dynamic SQL in SQL PL UDFs and stored procedures
ARRAY and LOB global variables
JSON function improvements for easier retrieval of JSON data
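For illustration, an advanced (SQL PL) trigger of the kind DB2 12 now supports could look like this sketch; the table, column, and trigger names are all made up:

```sql
-- SQL PL trigger body with a declared variable and control flow --
-- logic that previously required an external routine
CREATE TRIGGER AUDIT_SAL
  AFTER UPDATE OF SALARY ON EMP
  REFERENCING OLD AS O NEW AS N
  FOR EACH ROW
BEGIN
  DECLARE DELTA DECIMAL(9,2);
  SET DELTA = N.SALARY - O.SALARY;
  -- Record only unusually large raises in an audit table
  IF DELTA > 10000 THEN
    INSERT INTO SAL_AUDIT (EMPNO, CHANGE_TS, DELTA)
      VALUES (N.EMPNO, CURRENT TIMESTAMP, DELTA);
  END IF;
END
```

With versioning support, such a trigger body can also be regenerated or debugged like other native SQL routines.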
DB2 12: Application Enablement…
- Enhanced MERGE support
- New SQL pagination syntax
- Piece-wise modification of data (DELETE)
- XMLMODIFY multiple-update support
- Bi-temporal improvements: inclusive-inclusive support, temporal RI, logical transactions for system time
DB2 RESTful API Support
- Many modern application developers work with REST services and JSON data formats
- The DB2 Adapter for z/OS Connect provides the means to do this; available via DB2 Accessories Suite for z/OS V3R3, for DB2 10 or later
- Future direction: a native DB2 REST provider – easier DBA management of DB2 RESTful services, using the existing DRDA infrastructure, with z/OS Connect Enterprise Edition integration
Speaker notes: As an example of what we are providing: with the DB2 Adapter for z/OS Connect, SQL can be consumed as a service by application developers who invoke these services through RESTful APIs. Many modern applications today require RESTful APIs and JSON data formats. The DB2 Adapter was in beta as of late October 2015, with customers working with it and testing it, and it is being made available on V10 and V11.
Hear what our customers are saying about DB2 12 and RESTful API support!

DB2 12 rules the API Economy
“The RESTful API is yet another way in which DB2 is at the leading edge – again cementing DB2’s and the mainframe’s position as a fully capable server in today’s IT infrastructure. Using these REST services, mobile applications can be both built faster and run faster!”
Frank Petersen, Chief Architect, Bankdata

DB2 12 – RESTful API helping enterprises to be agile
“The RESTful API in DB2 12 for z/OS allows you to develop mobile and other apps with scalable performance in a matter of minutes. Do you want to be quick and agile? Use the RESTful API in DB2 12 for z/OS.”
Kurt Struyf, IBM Gold Consultant, IBM Champion for Analytics
Enhanced MERGE
DB2 for z/OS initial support for the MERGE statement, with limited functionality, was delivered in Version 9: limited to UPDATE and INSERT (and only one of each), and focused on host-variable column arrays to provide multiple rows of input data.
In DB2 12, the MERGE statement is aligned with the behavior defined in the SQL standard and the rest of the DB2 family:
- Source data as a table-reference
- Multiple MATCHED clauses
- Additional predicates with [NOT] MATCHED
- Support for the DELETE operation
- IGNORE and SIGNAL allowed
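The new capabilities can be sketched together in one statement (table and column names hypothetical): a table-reference source, multiple MATCHED clauses with extra predicates, plus DELETE and SIGNAL.

```sql
-- Reconcile a staging table into ACCOUNTS in a single MERGE
MERGE INTO ACCOUNTS A
  USING (SELECT ACCT_ID, BALANCE, STATUS FROM ACCT_STAGE) S
  ON A.ACCT_ID = S.ACCT_ID
  WHEN MATCHED AND S.STATUS = 'CLOSED' THEN
    DELETE
  WHEN MATCHED AND S.BALANCE < 0 THEN
    SIGNAL SQLSTATE '70001' SET MESSAGE_TEXT = 'negative balance'
  WHEN MATCHED THEN
    UPDATE SET A.BALANCE = S.BALANCE
  WHEN NOT MATCHED THEN
    INSERT (ACCT_ID, BALANCE) VALUES (S.ACCT_ID, S.BALANCE);
```

Under the V9-level MERGE, the DELETE and SIGNAL branches, and the extra MATCHED predicates, would each have required separate statements.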
SQL Pagination
With the growth of web and mobile applications, application developers are looking for more efficient ways to develop well-performing applications.
Numeric-based pagination:
SELECT * FROM tab OFFSET 10 ROWS FETCH FIRST 10 ROWS ONLY
Data-dependent pagination – existing syntax:
WHERE (LASTNAME = ‘SMITH’ AND FIRSTNAME >= ‘JOHN’) OR (LASTNAME > ‘SMITH’)
New equivalent syntax:
WHERE (LASTNAME, FIRSTNAME) > (‘SMITH’, ‘JOHN’)
Speaker notes: This slide shows examples of the SQL pagination updates implemented in DB2 12. V10 introduced range-list access; V12 adds the new, cleaner syntax.
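Putting the two styles together (table and column names hypothetical), a first page can be fetched numerically and subsequent pages continued from the last row seen, which avoids re-scanning skipped rows:

```sql
-- Page 3 of 10 rows, numeric pagination
SELECT LASTNAME, FIRSTNAME
  FROM EMP
  ORDER BY LASTNAME, FIRSTNAME
  OFFSET 20 ROWS
  FETCH FIRST 10 ROWS ONLY;

-- Next page, data-dependent: continue after the last row returned
SELECT LASTNAME, FIRSTNAME
  FROM EMP
  WHERE (LASTNAME, FIRSTNAME) > ('SMITH', 'JOHN')
  ORDER BY LASTNAME, FIRSTNAME
  FETCH FIRST 10 ROWS ONLY;
```

The row-value predicate form also matches an index on (LASTNAME, FIRSTNAME) more naturally than the equivalent OR-ed predicates.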
Piece-wise Modification of Data
Problem: mitigate the effects of locking and logging when potentially millions of rows could be affected by a simple statement like:
DELETE FROM T1 WHERE C1 > 7
Solution: allow the FETCH clause to be specified on a searched DELETE statement:
DELETE FROM T1 WHERE C1 > 7 FETCH FIRST 5000 ROWS ONLY;
COMMIT;
Speaker notes: With the existing behavior, a single DELETE can qualify a huge number of rows, which creates pain points for locking as well as logging, and rollback of such SQL can be problematic. DB2 12 introduces FETCH FIRST n ROWS syntax to limit the number of rows deleted in a single SQL statement.
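A typical purge pattern built on this feature might look like the following sketch (a native SQL procedure with invented names; looping until a batch deletes no rows):

```sql
-- Delete qualifying rows in bounded batches, committing between
-- batches so locks and log volume stay small
CREATE PROCEDURE PURGE_T1()
LANGUAGE SQL
BEGIN
  DECLARE ROWS_DELETED INT DEFAULT 1;
  WHILE ROWS_DELETED > 0 DO
    DELETE FROM T1
      WHERE C1 > 7
      FETCH FIRST 5000 ROWS ONLY;
    GET DIAGNOSTICS ROWS_DELETED = ROW_COUNT;
    COMMIT;
  END WHILE;
END
```

Each iteration holds locks for at most 5,000 rows, and a failure mid-purge only rolls back the current batch rather than millions of rows.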
DBA Productivity – DB2 12 Goals
- Relief for table scalability limits
- Simplified large table management
- Improved availability
- Agile schemas (more online schema changes)
- Security and compliance improvements
- Streamlined migration process
- Utility performance, availability, and usability
Partition-By-Range: Current Limitations
- Maximum table size limited to 16 TB (4K pages) or 128 TB (32K pages)
- Maximum number of partitions also depends on DSSIZE and page size; e.g. with DSSIZE = 256 GB and a 4K page size, the maximum is 64 partitions
- DSSIZE is set at the table space level, not the partition level, so all partitions inherit the same DSSIZE; there is no way to have differing partition sizes
- Altering DSSIZE requires a REORG of the entire table space
Speaker notes: Partition-by-range table spaces have limitations such as a maximum of 4096 partitions and a maximum table size of 16 TB. Because DSSIZE can only be set at the table space level, different partitions cannot have different DSSIZEs, yet there are many situations where some partitions need to be larger than others, depending on the data. If the DSSIZE needs to be altered, a REORG must be run on the entire table space; it cannot be run on individual partitions, so an outage may occur when the SWITCH phase takes place, since all partitions are involved in the switch.
DB2 12: Lifting the Limits
- New PBR table space structure called “PBR RPN”: relative page numbers (RPN) instead of absolute, removing the dependency between the number of partitions and partition size
- The new RID is a relative RID: the partition number is stored in the partition header page, and the page number stored in a data page is relative to the start of the partition
- Up to 1 TB partition size, or 4 petabytes (PB) per table space
- Maximum number of rows with 4K pages increased from 1.1 to 280 trillion; at 1,000 rows inserted per second, more than 8,800 years to fill!
- Increasing DSSIZE is supported at the partition level
- New DSSIZE support for indexes
- These infrastructure changes position DB2 for future enhancements: increased partition limits, more rows per page, attribute variance by partition, schema changes via REORG PART
Speaker notes: The 60+ billion figure is a theoretical example based on a short row length. The maximum number of rows with 32K pages increased from 1.1 to about 35 trillion; at 1,000 rows inserted per second, that is more than 1,100 years to fill.
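In DDL terms, this might be sketched as follows (database, table space, and size values are hypothetical): PAGENUM RELATIVE selects the RPN structure, and a single partition's DSSIZE can then be raised online.

```sql
-- Create a PBR RPN table space: partition count and partition size
-- are no longer tied to each other
CREATE TABLESPACE TS_RPN IN MYDB
  NUMPARTS 16
  PAGENUM RELATIVE
  DSSIZE 64 G;

-- Later: grow only partition 7, immediately, without a REORG of
-- the whole table space
ALTER TABLESPACE MYDB.TS_RPN
  ALTER PARTITION 7 DSSIZE 256 G;
```

This is exactly the per-partition DSSIZE flexibility that the previous slide lists as impossible for classic (absolute page number) PBR table spaces.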
DB2 12 Online Schema Improvements
- Insert partition
- Online deferred ALTER INDEX COMPRESS YES (previously this placed indexes in RBDP)
- Option to defer column-level ALTERs and materialize them through online REORG, avoiding availability constraints and conflicts with other deferred alters
- TRANSFER OWNERSHIP
Speaker notes: Insert partition allows you to insert a new partition into the middle of a partitioned table space. Online deferred alter lets you alter an index to use compression without causing an outage. The option to defer column-level alters is mainly a usability and availability improvement, so that you can mix alter statements. TRANSFER OWNERSHIP transfers ownership to a new authid or role without having to drop and recreate the object, via the TRANSFER OWNERSHIP statement; the security administrator or the object owner can perform the transfer for databases, table spaces, tables, indexes, views, and storage groups, and any implicitly created objects are also transferred.
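The ownership transfer described above might be issued like this (object and role names are hypothetical):

```sql
-- Move ownership of a table to a role; the REVOKE PRIVILEGES
-- clause removes the previous owner's implicit privileges
TRANSFER OWNERSHIP OF TABLE PRODSCHEMA.ORDERS
  TO ROLE DBA_ROLE
  REVOKE PRIVILEGES;
```

Run by the security administrator or the current owner, this avoids the old drop-and-recreate dance when, for example, the employee who created an object leaves the company.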
Migration & Catalog
Single-phase migration process:
- No ENFM phase; new function is activated through the new -ACTIVATE FUNCTION LEVEL command
- APPLCOMPAT rules and fallback rules continue to apply
- BSDS conversion to support the 10-byte log RBA is a prerequisite
- BRF is deprecated: BRF page sets are still supported, but the zparm and REORG options are removed
- Temporal RTS tables: defined in the catalog; enablement is optional
Speaker notes: As a reminder, “deprecation” is the discouragement of use of some feature, design, or practice, typically because it has been superseded or is no longer considered safe, without (at least for the time being) removing it from the system of which it is a part or prohibiting its use.
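The activation step replacing ENFM can be sketched as two commands (the function level string follows the DB2 12 convention; the subsystem context is assumed):

```sql
-- Check the current code and function levels across the group
-DISPLAY GROUP

-- Activate DB2 12 new function once all members are ready
-ACTIVATE FUNCTION LEVEL (V12R1M500)
```

APPLCOMPAT still gates what each application sees, so activating the function level does not by itself change application behavior.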
DB2 12 for z/OS: Accelerated Value
Deliver desirable, consumable capabilities to the marketplace with speed and quality.
DB2 for z/OS is moving to a continuous delivery model based upon DB2 12. Why?
- Faster delivery of easily consumable new features
- Integrates well with the DevOps methodologies being adopted by our users
- An eased deployment burden enables faster adoption of new technology
Announcing DB2 12 Utilities and Tools – Unlocking the Power of DB2 12 and More
- Comprehensive solutions helping clients save time and money
- Making it faster and easier to take advantage of DB2 12 – no guesswork in support
- DB2 ESP clients tested IBM DB2 Utilities and Tools; the utilities and tools allowed for a swift and successful DB2 12 experience
- Self-managing DB2 systems: streamlining user tasks with an improved visual experience and automated processes
- Agile application deployment: rapid deployment of applications and database schema changes
- ESP clients gave overwhelmingly positive feedback using OMPE: “This was our first time using OMPE reports and I was very glad for the capabilities.” – a large U.S. retailer
IBM DB2 Utilities – Key to Enabling DB2 Function
Continuing evolution of the REORG utility:
- Diminishing importance of data re-clustering for application performance, thanks to optimizer improvements, I/O performance improvements, caching improvements, and contiguous buffer pools
- Increasing use of IBM REORG for schema evolution: insert partition, PBR RPN conversion, deferred column-level alter, LOB compression
- Improved PBG partition management: overflow to a new PBG partition to ensure successful partition-level REORG of PBGs
Maximizing Efficiency & Eliminating Application Impact
Improved efficiency:
- Further reduction in CPU cost and more offload to zIIP: REORG up to 57% zIIP offload, LOAD up to 90%
- REGISTER NO option to eliminate data sharing overhead for RUNSTATS and UNLOAD
- COLGROUP statistics CPU cost reduced by up to 25%, elapsed time by up to 15%
- More efficient handling of compressed data to reduce CPU and elapsed time across a range of utilities
- REORG avoidance: immediate increase of partition DSSIZE with PBR RPN
- Improved FlashCopy support: multiple DFSMS COPYPOOL support for SLBs and better messaging; improved FlashCopy handling in REORG and template support for MGMTCLAS and STORCLAS
Eliminating application impact:
- Improved LOAD utility support for sequences, with automatic handling of MAXASSIGNEDVAL
- Online LOAD REPLACE: non-disruptive refresh of reference tables
- RUNSTATS can skip invalidation of cached statements
- Removed recoverability restrictions for PBG table spaces
IBM DB2 Utilities and Tools – Moving from Automation to Self-Management
“The RUNSTATS enhancement with profiles, inline stats and optimizer ability to update, completes the picture for us. We are extremely satisfied.” – Walter Janißen, ITERGO
Continue to build upon the existing self-management infrastructure. Managing statistics in DB2 12:
- Direct update of statistics profiles by the optimizer and via DDL
- Utility inline statistics support for USE PROFILE
- Automation Tool completes the cycle: it detects profile changes and drives new statistics gathering
[Diagram: cycle linking PREPARE/BIND/REBIND, the optimizer, the statistics profile, the Automation Tool, and RUNSTATS]
IBM DB2 Management with IBM Tools
- Comprehensive and up-to-the-minute support for DB2 12 and beyond, with automatic exploitation of the latest capabilities
- Industry-leading analytics support from the current tooling investment
- Simplified yet powerful management of DB2 systems
- Consolidated information with an integrated visual experience via Data Server Manager (DSM)
[Diagram: DSM integrating database administration, utilities, and performance – V12 utilities, automation, self-management, V12 schema, DB2 Analytics Accelerator, cloud provisioning, V12 instrumentation, V12 SQL tuning]
IBM DB2 Admin Tool & DB2 Object Comparison Tool
- Cloud provisioning: subsystem lifecycle management through the administration tools, DSM and z/OSMF – install, migrate, clone
- Application development lifecycle services: UrbanCode Deploy (UCD) and z/OSMF workflow integration with the DB2 Administration and Object Comparison tools via REST APIs
[Diagram: a user drives UCD and z/OSMF workflows, which issue REST calls over HTTPS to the Admin and Object Comparison tools; these in turn run SQL and DB2 commands against the source subsystem (DB2A) and target subsystem (DB2B)]
IBM DB2 Utilities and Tools – The Key to Unlocking DB2 12 and Beyond
- IBM utilities and tools are key to the exploitation of new capability in DB2 (PBR RPN, PBG management, ALTER, REORG, statistics)
- Continual focus on comprehensive, efficient management of DB2 environments
- New solutions in support of ever-greater demands for simplicity, availability, and efficiency
- Unquestionable support of DB2 to meet business needs
- Inherently suited to supporting DB2’s future continuous delivery model
- The role of IBM utilities and tools extends far beyond the simple management of data and DB2 systems: they ensure the manageability and viability of DB2 systems now and into the future