
1 Version 11.70 Overview John F. Miller III, IBM

2 Talk Outline
Overview
Storage Enhancements
– Storage Provisioning
– Storage Optimization
– Compression
Index Improvements
– Forest of Trees Indexes
– Create Index extent sizes
– Constraints without an index
Miscellaneous
– Network Performance
– Pre-Load C-UDRs
Fragmentation / Partitioning
– Interval Fragmentation
– Add and Drop Fragments Online
– Fragment Level Statistics
Data Warehouse
– Multi Index Scans
– Star and Snowflake Joins

3 Storage Provisioning

4 What is Storage Provisioning?
Proactively or reactively adding storage to eliminate out-of-space errors
– Monitor spaces and automatically grow a container when its free space falls below a specified amount
– Stall an SQL statement that is about to fail because of insufficient space until space is allocated to the depleted container
The ability to tell Informix about disk space that it can use to solve future storage issues
– Raw devices
– Cooked files
– Directories

5 Benefits of Storage Provisioning
"Out-of-space" errors are virtually eliminated
Manual expansion and creation of storage spaces without having to worry about where the space will come from
Automatic expansion of dbspaces, temporary dbspaces, sbspaces, temporary sbspaces, and blobspaces
The feature is fully incorporated into OAT

6 Storage Provisioning: The Power of 2
Two available modes:
– Manual
– Automatic
Two available space expansion methods:
– Chunk extension
– Chunk creation
Two available interfaces:
– sysadmin task()/admin() functions (SQL interface)
– OAT (graphical interface)

7 Storage Pool Facts
What is the storage pool?
– How the DBA tells Informix about space it can use to solve future space issues
– A file, device, or directory in the pool is called an entry
There is one storage pool per IDS instance
You can add, modify, delete, and purge storage pool entries
EXECUTE FUNCTION task("storagepool add", "/work/dbspaces/dbs1", "0", "1GB", "100MB", "1");
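An annotated version of the same call may help decode the arguments. A minimal sketch, assuming the documented argument order (path, offset, total size, chunk size, priority) and a hypothetical second device; sizes without a unit default to kilobytes:

EXECUTE FUNCTION task(
    "storagepool add",        -- admin command
    "/work/dbspaces/dbs2",    -- hypothetical device, file, or directory to add
    "0",                      -- starting offset into the device
    "2GB",                    -- total space Informix may take from this entry
    "100MB",                  -- size of each chunk carved from the entry
    "2");                     -- priority: 1 = high, 2 = medium, 3 = low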

8 OAT's View of the Storage Pool
Summary of space left in the storage pool
Automatic policies

9 Extendable Chunks
The ability to expand an existing chunk
Chunks are created non-expandable by default
A chunk can be extended automatically or manually
Extending a chunk does NOT consume space from the storage pool
Example of enabling the extendable property of chunk 13:
EXECUTE FUNCTION task("modify chunk extendable on", "13");
Example of manually extending chunk 27 by 2 GB:
EXECUTE FUNCTION task("modify chunk extend", "27", "2GB");

10 OAT's View of the Chunk Pod
Fragmentation map of the selected chunk
Chunk actions: extend a chunk, add a new chunk, modify chunk settings, drop a chunk

11 Expanding a Storage Container
Keep the addition of space to a storage container simple
– The creator of a storage container specifies how the space should grow
– Manual allocations of space: just say do it
The predefined container provisioning policies allocate new space to a container as follows:
1. Determine whether any chunk in the storage container is expandable
2. If no chunk can successfully expand, add a new chunk
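A manual expansion can also be requested directly through the SQL admin API, which applies the same two-step policy. A hedged sketch, assuming the "modify space expand" admin command and a hypothetical dbspace name:

-- grow dbspace "orders_dbs" by 1 GB: extend a chunk if one is
-- extendable, otherwise add a new chunk from the storage pool
EXECUTE FUNCTION task("modify space expand", "orders_dbs", "1GB");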

12 Expanding a Space in OAT

13 Creating or Dropping a Space with the Storage Pool
You can create a new storage container utilizing space from the storage pool
Example of creating a 100 MB dbspace called orders_dbs:
EXECUTE FUNCTION ADMIN ('create dbspace from storagepool', 'orders_dbs', '100M');
You can drop an existing storage container and return the space to the storage pool
Example of dropping a dbspace called dbs1:
EXECUTE FUNCTION ADMIN ('drop dbspace to storagepool', 'dbs1');

14 Save Your Company

15 Prevent Accidental Disk Initialization
Saves companies from potential disasters: accidental disk re-initialization (i.e., oninit -i)
New onconfig parameter FULL_DISK_INIT:
– 0: only allow system initialization if page zero of the rootdbs is not recognized
– 1: always allow system initialization
After initialization, the value is automatically set back to 0

16 Storage Optimization

17 Optimizing Tables
As a DBA I need to…
– Reduce the number of extents a table contains
– Move all rows to the beginning of a table
– Return unused space at the end of a table to the system
– Shrink a partially used extent at the end of a table
All this while the table is still being accessed and modified, and while you are watching your favorite TV show!

18 Storage Management Overview
Defragment extents
– Combines extents to reduce the number of extents in a table
Data compression
– Reduces the amount of storage taken by a single row
Table compaction
– Reduces the number of pages utilized by a table
Index compaction
– Ensures the index pages are kept full
Automate the optimization of table storage
– Applies policies to optimize tables

19 Optimizing Table Extents – Defragment
The number of extents a table/partition can have has increased
Defragment extents:
– Moves extents to be adjacent
– Merges the adjacent extents into a single extent
EXECUTE FUNCTION ADMIN ('DEFRAGMENT', 'db1:customer');
(Figure: the customer table's five extents, interleaved with extents of the orders, items, and products tables in dbspace1, are moved adjacent and merged, reducing the customer table's extent count.)

20 Optimizing Tables and Indexes

21 Defragment Table Extents Online

22 Data Compression
Reduces the space occupied by the row
Compressing a table can be done online
Compress either a table or a fragment
A custom dictionary is built for each fragment to ensure the highest levels of compression
Tables with compressed rows ALWAYS have variable-length rows
Many benefits:
– Smaller archives
– More data in the buffer pool
– Fewer long/forwarded rows
– Fewer I/Os for the same amount of data read/written
execute function task("compress table", "tab1", "db");

23 REPACK Command
Moves all rows in a table/fragment to the beginning, leaving all the free space at the end of the table
Online operation; users can be modifying the table
execute function task("table repack", "customer", "db");
(Figure: the rows of the customer table are moved to the start of the table, consolidating the free space at the end.)

24 SHRINK Command
Frees the space at the end of a table so other tables can utilize it
– Entire extents are freed
– The last extent in a table can be partially freed
– Will not shrink a table smaller than its first extent size
New command to modify the first extent size: ALTER TABLE … MODIFY EXTENT SIZE
Online operation
execute function task("table shrink", "customer", "db");
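A short sketch of how the new first-extent resizing combines with a shrink; the customer table and the size (in kilobytes) are illustrative assumptions:

-- lower the first extent size so a subsequent shrink can go further
ALTER TABLE customer MODIFY EXTENT SIZE 64;
-- then shrink the table down toward its (new) first extent size
execute function task("table shrink", "customer", "db");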

25 Automatically Optimize Data Storage

26 Index Optimization

27 Create Index with a Specific Extent Size
– The CREATE INDEX syntax has been enhanced to support specifying an extent size for indexes
– Better sizing and utilization
The default index extent size is: (index key size / data row size) * data extent size
CREATE INDEX index_1 ON tab_1(col_1) EXTENT SIZE 32 NEXT SIZE 32;

28 Create Index Extent Sizes
Ability to specify the extent size when creating an index
Allows for optimal space allocation
Utilities such as dbschema report the index extent sizes
create index cust_ix1 on customer (cust_num) in rootdbs extent size 80 next size 40;

29 Creating Constraints without an Index
Saves the overhead of the index for small child tables
CREATE TABLE parent(c1 INT PRIMARY KEY CONSTRAINT parent_c1, c2 INT, c3 INT);
CREATE TABLE child(x1 INT, x2 INT, x3 VARCHAR(32));
ALTER TABLE child ADD CONSTRAINT (FOREIGN KEY(x1) REFERENCES parent(c1) CONSTRAINT cons_child_x1 INDEX DISABLED);

30 B-Tree Index
The figure above shows a single root node; access to the underlying twig and leaf pages is through this one page, which is where mutex contention can occur

31 New Index Type: "Forest of Trees"
Traditional B-tree indexes suffer from performance issues when many concurrent users access the index
– Root node contention can occur when many sessions are reading the same index at the same time
– The depth of a large B-tree index increases the number of levels, which results in more buffer reads
Forest of Trees (FOT) reduces some of these B-tree index issues:
– The index is larger but often not deeper, which reduces the time for index traversals to leaf nodes
– The index has multiple subtrees (root nodes) called buckets, which reduces root node contention by enabling more concurrent users to access the index

32 Forest of Trees Index (FOT Index)
Reduces contention on an index's root node
Several root nodes
Some B-tree functionality is NOT supported:
– max() and min()
create index index_2 on TAB1( C1, C2 ) hash on ( C1 ) with 3 buckets;

33 FOT – Determining Use – ONCHECK & SYSMASTER
Check oncheck -pT information
Check the sysmaster database: select nhashcols, nbuckets from sysindices
(Output: the oncheck report's per-level totals of keys and average free bytes; in this example there are 100 Level 1 buckets, i.e., root nodes.)

34 Network & UDR Performance

35 Network Performance Improvements
Caching network services
Multiple listener threads for a single server name
Multiple file descriptor servers
Previous network improvements:
– Dynamic start and stop of listener threads
– Pre-allocated user sessions

36 Network Performance – Caching Network Services
Database caching of hosts, services, users, and groups
Avoids going to the operating system for each network call
Administrator-defined timeout value set for the network caches
ONCONFIG example: NS_CACHE host=900,service=900,user=900,group=900
Each cache is dynamically configurable
onstat -g cache prints the effectiveness of the caches

37 Network Performance – Multiple Listeners
Able to define multiple listener threads for a single DBSERVERNAME and/or DBSERVERALIAS
Add the number of listeners to the end of the alias
EXAMPLE – to start three listener threads for idsserver, modify the ONCONFIG as follows:
DBSERVERNAME idsserver-3

38 Network Performance Results
My simple network performance tests: 200 users connecting and disconnecting
Connection throughput on an AIX server improved by 480%
Connection throughput on a Linux server improved by 720%
Computer Type | Without Improvements | Utilizing Improvements
AIX 64        | 2m 5s                | 27s
Linux 64      | 10m 11s              | 1m 20s

39 Improve Throughput of C User Defined Routines (C-UDRs)
Preloading a C-UDR shared library allows Informix threads to migrate from one VP to another during execution of the C-UDR
– Increased performance
– Balanced workloads
Without this feature:
– The C-UDR shared libraries are loaded when the UDRs are first used
– The thread executing the UDR is bound to the VP for the duration of the C-UDR execution
PRELOAD_DLL_FILE $INFORMIXDIR/extend/test.udr
PRELOAD_DLL_FILE /var/tmp/my_lib.so

40 Verifying the C-UDR Shared Library Is Preloaded
online.log during server startup:
14:23:41 Loading Module
14:23:41 The C Language Module loaded
onstat -g dll new flags:
– 'P' represents preloaded
– 'M' represents that the thread can migrate
(Output: the Datablades section of onstat -g dll lists addr, slot, vp, baseaddr, flags, and filename columns; the preloaded /var/tmp/test.udr entries show the PM flags.)

41 Update Statistics

42 Agenda
New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

43 Seamless Installation and Smarter Configuration
Can migrate from earlier Informix versions (including 10.0, 9.40, and 7.31) directly to Informix Version 11.70
A new installation application, using the new ids_install command, makes it easier to install and configure Informix products and features
– A typical installation now has improved default settings to quickly install all of the products and features in the software bundle, with preconfigured settings
– The custom installation is also smarter than before and allows you to control what is installed
Both types of installation allow you to create an instance that is initialized and ready to use after installation
Must use a custom installation setup if you want to configure the instance for your business needs

44 Changes to Installation Commands
Some installation commands changed:
– To take advantage of new and changed functionality
– To improve consistency across products and operating systems
Deprecated commands:
– installserver
– installclientsdk
– installconn
Must use ids_install to install Informix with or without bundled software
New uninstallids command:
– Removes the server, any bundled software, or both
– To remove specific products:
uninstall/uninstall_server/uninstallserver
uninstall/uninstall_clientsdk/uninstallclientsdk
uninstall/uninstall_connect/uninstallconnect (formerly uninstallconn)
uninstall/uninstall_jdbc/uninstalljdbc.exe or java -jar uninstall/uninstall_jdbc/uninstaller.jar (depending on how you installed the JDBC driver)

45 Auto-Registration and Auto VP Creation
Database extensions (formerly known as built-in DataBlade modules) are automatically registered
Prerequisite tasks, such as registering the extensions or creating specialized virtual processors, are no longer required
The BTS, WFSVP, and MQ virtual processors are created automatically
The idsxmlvp virtual processor is created automatically when an XML function is first used
An sbspace is created automatically for basic text searches and spatial extensions, if a default sbspace does not exist
Basic Text Search, Web Feature Service, Node, Spatial, Binary, Large Object Locator, TimeSeries, and MQ Messaging can now be used without first registering them in your database

46 dbschema and dbexport Enhancements
dbschema and dbexport utility enhancement for omitting the specification of an owner
– Can use the new -nw option to generate the SQL for creating an object without specifying an owner

47 Generating Storage Spaces and Logs with dbschema
Can now generate the schema of storage spaces, chunks, and physical and logical logs with the dbschema utility
Choose to generate:
– SQL administration API commands: dbschema -c dbschema1.out
– onspaces and onparams utility commands: dbschema -c -ns dbschema2.out
For migrations, generate the schema before unloading data with the dbexport and dbimport utilities
SQL administration API format:
# Dbspace 1 -- Chunk 1
EXECUTE FUNCTION TASK ('create dbspace', 'rootdbs', '/export/home/informix/data/rootdbs', '200000', '0', '2', '500', '100')
# Dbspace 2 -- Chunk 2
EXECUTE FUNCTION TASK ('create dbspace', 'datadbs1', '/export/home/informix/data/datadbs', '…', '0', '2', '100', '100')
# Physical Log
EXECUTE FUNCTION TASK ('alter plog', 'rootdbs', '60000')
# Logical Log 1
EXECUTE FUNCTION TASK ('add log', 'rootdbs', '10000')
onspaces/onparams format:
# Dbspace 1 -- Chunk 1
onspaces -c -d rootdbs -k 2 -p /export/home/informix/data/rootdbs -o 0 -s 200000 -en 500 -ef 100
# Dbspace 2 -- Chunk 2
onspaces -c -d datadbs1 -k 2 -p /export/home/informix/data/usrdbs -o 0 -s … -en 100 -ef 100
# Logical Log 1
onparams -a -d rootdbs -s 10000

48 Support for the IF EXISTS and IF NOT EXISTS Keywords
Now you can include the IF NOT EXISTS keywords in SQL statements that create a database object (or a database)
You can also include the IF EXISTS keywords in SQL statements that destroy a database object (or a database)
– If the condition is false, the CREATE or DROP operation has no effect, but no error is returned to the application
Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax
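A minimal sketch of the new keywords, using a hypothetical table:

-- succeeds even if the table is already there
CREATE TABLE IF NOT EXISTS customer (id INT PRIMARY KEY, name VARCHAR(64));
-- succeeds even if the table was never created
DROP TABLE IF EXISTS customer;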

49 Simplified SQL Syntax for Defining Database Tables
No more restrictions on the order in which column attributes can be defined in DDL statements
– Simplifies the syntax rules for column definitions in the CREATE TABLE and ALTER TABLE statements
The specification for default values can precede or follow any constraint definitions
The list of constraint definitions can also be followed (or preceded) by the default value, if a default is defined on the column
The NULL or NOT NULL constraint does not need to be listed first if additional constraints are defined
Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax
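Per the slide, both orderings below are now accepted; earlier servers were stricter about the order. A sketch with hypothetical names:

CREATE TABLE t1 (
    c1 INT DEFAULT 0 NOT NULL,   -- default before the constraint
    c2 INT NOT NULL DEFAULT 0    -- constraint before the default
);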

50 Stored Procedure Debugging (SPD)
Application developers need to debug SPL procedures in Informix when necessary
– Should be able to execute an SPL routine line by line, stepping into nested routines and analyzing the values of local, global, and loop variables
– Should be able to trace the execution of SPL procedures; trace output should show the values of variables, arguments, return values, and SQL and ISAM error codes
Prerequisites:
– Informix 11.70 or above
– Integration with the Optim Data Studio procedure debugger
– Integration with the Microsoft Visual Studio debugger; DRDA must be enabled

51 SPD – Supported Commands
Breakpoints
Run
Step Over
Step Into (for nested procedures)
Step Return
Get Variable value
Set Variable value

52 Explicit PDQ vs. Implicit PDQ
Explicit PDQ
– User setting (SET PDQPRIORITY statement)
– All queries in the current session use the same setting
Implicit PDQ
– IDS determines the resource requirement based on the optimizer's estimates
– Each query can have a different PDQ setting

53 Implicit PDQ – Enable
SET ENVIRONMENT IMPLICIT_PDQ ON/OFF
– Enables/disables implicit PDQ for the current session
– When enabled: Informix automatically determines an appropriate PDQPRIORITY value for each query, and ignores the explicit PDQ setting unless BOUND_IMPL_PDQ is also set
– When disabled: Informix does not override the current PDQPRIORITY setting
SET ENVIRONMENT BOUND_IMPL_PDQ ON/OFF
– Uses the explicit PDQPRIORITY setting as an upper bound when calculating the implicit PDQ setting
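A short session sketch combining the two settings; the statement spelling follows the slide, and the query and table are hypothetical:

SET PDQPRIORITY 80;                  -- explicit ceiling for this session
SET ENVIRONMENT IMPLICIT_PDQ ON;     -- let the optimizer pick a value per query
SET ENVIRONMENT BOUND_IMPL_PDQ ON;   -- but never exceed the explicit setting
SELECT cust_id, SUM(total) FROM orders GROUP BY cust_id;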

54 Agenda
New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

55 Deployment Assistant (DA) – Self-Configuring
Enables users to easily package snapshots of Informix instances and/or their data in preparation for deployment
– In past releases a snapshot had to be created manually
Built-in intelligence to capture and configure an Informix snapshot more easily
– Allows reduction of the packaged instance to the user's minimum desired configuration
Graphical User Interface (GUI) developed in Java/Eclipse SWT
ifxdeployassist command
– Starts the Deployment Assistant interface, which prompts for the required information to capture the instance

56 Deployment Assistant (DA) – Packages
Produces packages that are ready for use by the Deployment Utility (DU)
Build a package containing:
– Informix
– (Optional) pre-built database(s)
– (Optional) applications
Compresses the package (BZIP2, GZIP, TAR, and ZIP) without requiring 3rd-party compression tools
Deploy, decompress, and install the package on multiple systems
Good for media distribution such as CDs
Supported on Windows and Linux
No current support for data on RAW devices

57 Deployment Assistant (DA) – Usage
To run the Deployment Assistant, run the following command from $INFORMIXDIR/bin: ifxdeployassist
On Windows, executing this command with the INFORMIXSERVER environment variable set triggers automatic detection of the specified instance

58 Deployment Utility (DU) – New Options
Available since IDS 11.50.xC6 on all platforms: ifxdeploy
– Can deploy a pre-configured snapshot of an IDS instance on one or more machines by unzipping an archive, creating users, updating configuration files, setting file permissions, and so on
– Can create new instances from existing ones or from onconfig.std, or uninstall instances
– Chunks can be dynamically relocated to a new path at deployment time
New command-line options (11.70):
– The -start option starts the Informix instance after deployment and waits for it to be initialized (equivalent to running oninit -w)
  Optionally add a number of seconds to wait before returning from the command
  The Deployment Utility configuration file has a new option, START
– The -autorecommend option calculates optimal values for Informix configuration parameters based on the planned usage for the instance and the host environment

59 Deployment Utility (DU) – Example
To deploy a zipped tar file of an instance that prints verbose messages, sets SERVERNUM to 2, relocates the chunks to /work/chunks, sets new TCP listening ports, and starts the instance after deployment:
export INFORMIXDIR=/work/ixdir2; export INFORMIXSERVER=ixserver2;
ifxdeploy -file /work/snapshots/ifxdir.tgz -verbose -servernum 2 -relocate /work/chunks -rootpath /work/chunks -sqliport <port> -drdaport <port> -start -y
To create and start a new instance using an existing INFORMIXDIR:
export INFORMIXDIR=/work/ixdir; export INFORMIXSERVER=ixserver2;
ifxdeploy -servernum 2 -sqliport <port> -drdaport <port> -start -y

60 Unique Event Alarms
Informix uses the event alarm mechanism to notify the DBA about any major problems in the database server
Default alarm program scripts:
– UNIX: $INFORMIXDIR/etc/alarmprogram.sh
– Windows: %INFORMIXDIR%\etc\alarmprogram.bat
ONCONFIG parameters:
– ALARMPROGRAM
– SYSALARMPROGRAM

61 Unique Event Alarms – Overview
Informix has 79 event class IDs
For each event alarm class, there can be multiple specific messages used by that class
In previous releases it was not easy to differentiate one type of event alarm from another within the same event alarm class
– Required the user to parse the specific message string passed to the alarm program as one of its parameters
– Very inconvenient for applications which deeply embed IDS
Panther provides unique numerical values for each specific message
– Applications can interpret and take action on each event alarm

62 Programmability Enhancements
Consistent return codes for server initialization (oninit)
Very helpful for applications which administer Informix in deeply embedded environments
The application can take the appropriate action to bring the instance online successfully
During server initialization in embedded environments, the application may have to take action for:
– Shared memory creation/initialization failed
– Could not find libelf/libpam/…
– Incorrect command line syntax
– Error reading/updating onconfig
– Error calculating defaults in onconfig
– Incorrect serial number
– Not DBSA
– Incorrect SQLHOSTS entries
Sample shell script:
#!/bin/sh
# Execute the oninit program
oninit
# Get the return code from the oninit execution
RC=$?
# Validate the return code and take the necessary action
case $RC in
  0)   echo "RC=0: The database server was initialized successfully." ;;
  1)   echo "RC=1: Server initialization has failed." ;;
  187) echo "RC=187: Check the entries in the sqlhosts file." ;;
  221) echo "RC=221: DUMPDIR missing. Creating DUMPDIR."
       mkdir $INFORMIXDIR/tmp
       chmod 770 $INFORMIXDIR/tmp ;;
  *)   echo "Return Code=$RC !" ;;
esac

63 Embeddability – Other Features
Automated DB Scheduler tasks added:
– Automatic notification when IDS marks an index "bad"
– Automatic table storage optimization based on user-settable parameters
Informix Embeddability Toolkit:
– Tutorial for creating an end-to-end embeddability scenario
– Example scripts for using the Deployment Assistant/Utility
Install and deployment APIs:
– APIs to install and configure Informix from your application

64 Enhanced Security Management

65 Selective Row Level Auditing (SRLA)
onaudit
– Manages audit masks and configuration
– Need to be DBSSO or AAO
– DBSSO can perform functions related to audit setup
– AAO can perform functions related to audit analysis
– Examples:
onaudit -l 1
onaudit -c
onaudit -a -u sqlqa -e +RDRW
onshowaudit
– Lets the AAO extract information from an audit trail
– Example: onshowaudit -n

66 Selective Row Level Auditing (SRLA) – What's New?
Previously, there was no way to configure auditing to exclude audit events on tables that you did not want to monitor with the onaudit utility
– This could produce huge amounts of useless data
The database system security officer (DBSSO) can now configure auditing so that row-level events are recorded only for designated tables
– Versus for ALL tables used by the database server
Ability to select only the tables that you want to audit at the row level
– Can improve database server performance, simplify audit trail records, and mine audit data more effectively

67 SRLA – Syntax
New table-level property added (AUDIT):
– CREATE TABLE {existing syntax} [WITH AUDIT];
– ALTER TABLE {existing syntax} [ADD AUDIT] | [DROP AUDIT];
ADTROWS – new parameter in the audit configuration file (adtcfg):
– 0: no change in existing row-level auditing behavior (default)
– 1: SRLA is enabled; only "audit"-enabled tables will generate row-level audit records
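A brief sketch of the property in use, with hypothetical table names:

-- audit row-level events on this table from the start
CREATE TABLE payroll (emp_id INT, salary MONEY) WITH AUDIT;
-- enable or disable row-level auditing on an existing table
ALTER TABLE orders ADD AUDIT;
ALTER TABLE orders DROP AUDIT;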

68 Trusted Context – Why Use It?
Trusted Context is a feature originally developed for DB2
Allows connection reuse under a different userid with authentication, avoiding the overhead of establishing a new connection
Allows connection reuse under a different userid without authentication
– Accommodates application servers that need to connect on behalf of an end user but do not have access to that end user's password to establish a new connection on their behalf
Allows users to gain additional privileges when their connection satisfies certain conditions defined at the database server

69 Trusted Context – What Is It?
A database object created by the database security administrator (DBSECADM)
– Defines a set of properties for a connection that, when met, allow that connection to be a "trusted connection" with special properties:
The connection must be established by a specific user
The connection must come from a trusted client machine
The port over which the connection is made must have the required encryption
If the above criteria are met, the connection will allow changes in userid and privileges as defined in the trusted context

70 Trusted Context – Steps
Step 1: Create trusted context objects
– Created at the database level
– Must be created by DBSECADM before trusted connections can be established
– Can use OS users or mapped users
Step 2: Establish trusted connections
– Must satisfy the criteria defined in the trusted context
– Provision to switch user
– Use transactions within the switched-user session

71 Trusted Context – Example
CREATE TRUSTED CONTEXT CTX1
  BASED UPON CONNECTION USING SYSTEM AUTHID BOB
  DEFAULT ROLE MANAGER
  ENABLE
  ATTRIBUTES (ADDRESS ' ')
  WITH USE FOR JOE, MARY WITHOUT AUTHENTICATION
Creates a trusted context object named CTX1
Will allow connections from the address listed in the ATTRIBUTES clause
Can switch to user Joe or Mary once the trusted connection is established

72 Trusted Context – Switching Users
Switch to any user defined in the trusted context object's scope
Perform database operations
Audit records will show the switched user as the originator of the operations
If using transactions, commit or roll back before switching to a new user

73 Informix Mapped Users
Can now configure Informix so that users no longer require operating system accounts to connect
– Allows users authenticated by an external authentication service (such as Kerberos or Microsoft Active Directory) to connect to Informix
A DBSA turns on the USERMAPPING parameter in the onconfig file and maps externally authenticated users to user properties in tables of the SYSUSER database
Onconfig variable: USERMAPPING OFF|ADMIN|BASIC

74 Informix Mapped Users – Example
grant access to bob properties user fred;
– When 'bob' connects to Informix, as far as operating system access is concerned, Informix uses the UID, GID(s), and home directory of user 'fred' (which must be a user name known to the OS)
grant access to bob properties user fred, group (ifx_user), userauth (dbsa);
– Similar to the previous entry: user 'bob' will use UID 3000 ('fred') and GIDs 3000 (users) and 200 (staff), plus the extra group 1000 (ifx_user)
– Additionally, assuming USERMAPPING is set to ADMIN in the ONCONFIG file, 'bob' will be treated as a DBSA

75 Agenda
New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

76 What Is a Flexible Grid?
A named set of interconnected servers for propagating commands from an authorized server to the rest of the servers in the set
Useful if you have multiple servers and you often need to perform the same tasks on every server
The following types of tasks are easily run through a grid:
– Administering servers
– Updating the database schema and the data
– Running or creating stored procedures or UDRs
– Managing and maintaining replication

77 What Are the Features of the New Informix Flexible Grid?
Nodes in a grid do not have to be identical
– Different tables, different hardware, different OSs, different IDS versions
Simplified creation and maintenance of a global grid:
– Create a grid, attach to a grid, detach from a grid, add/drop a node to/from a grid
– DDL/DML operations on any node are propagated to all nodes in the grid
– Management of the grid can be done from any node in the grid
– Tables no longer require primary keys
– Integration with OpenAdmin Tool (OAT)

78 Define/Enable/Disable the Grid
The grid is managed using the cdr utility
Define – defines the nodes within the grid:
cdr define grid --all
cdr define grid
Enable – defines the nodes within the grid which can be used to perform a grid-level operation, and which users are allowed to perform grid operations:
cdr enable grid --grid= --user= --node=
Disable – removes a node or user from being able to perform grid operations:
cdr disable grid --grid= --node=
cdr disable grid --grid= --user=
cdr disable grid -g -n -u
OAT support enabled

79 Propagating Database Object Changes
Can make changes to database objects while connected to the grid and propagate the changes to all the servers in the grid
Can propagate creating, altering, and dropping database objects to servers in the grid
The grid must exist, and the grid routines must be executed as an authorized user from an authorized server
To propagate database object changes:
– Connect to the grid by running the ifx_grid_connect() procedure
– Run one or more SQL DDL statements
– Disconnect from the grid by running the ifx_grid_disconnect() procedure

80 Example of DDL Propagation
The following is executed on all nodes within the 'grid1' grid:
execute procedure ifx_grid_connect('grid1', 'tag1');
create database tstdb with log;
create table tab1 (
  col1 int primary key,
  col2 int,
  col3 char(20)) lock mode row;
create index idx1 on tab1 (col2);
create procedure loadtab1(maxnum int)
  define tnum int;
  for tnum = 1 to maxnum
    insert into tab1 values (tnum, tnum * tnum, 'mydata');
  end for;
end procedure;
execute procedure ifx_grid_disconnect();

81 Monitoring a Grid
cdr list grid
– View information about servers in the grid
– View the commands that were run on servers in the grid
– Without any options or a grid name, the output shows the list of grids
Servers in the grid on which users are authorized to run grid commands are marked with an asterisk (*)
When you add a server to the grid, any commands that were previously run through the grid have a status of PENDING for that server
Options include: --source= --summary --verbose --nacks --acks --pending
Example: cdr list grid grid1
NEW: monitor a cluster with onstat -g cluster

82 Informix Flexible Grid – Requirements
Enterprise Replication must be running
Servers must be on Panther (11.70.xC1)
– Pre-Panther servers within the ER domain cannot be part of the grid

83 Informix Flexible Grid – Quickly CLONE a Primary Server
Previously, to clone the primary:
– Create a level-0 backup
– Transfer the backup to the new system
– Restore the image
– Initialize the instance
ifxclone utility:
– Clones a primary server with minimal setup and configuration
– Starts the backup and restore processes simultaneously; no need to read or write data to disk or tape
– Can create a standalone server or a remote standalone secondary server
– Can add a server to a replication domain by cloning
– Requires the DIRECT_IO configuration parameter to be set to 0 on both the source and target servers
– Data is transferred from the source server to the target server over the network using encrypted SMX connections

84 Informix Flexible Grid – DDL on Secondary Servers
Can now automate table management in high-availability clusters by running Data Definition Language (DDL) statements on all servers
Can run most DDL statements, such as CREATE, ALTER, and DROP, on secondary servers
In previous releases, only Data Manipulation Language (DML) statements could be run on secondary servers

85 Replicate Tables without Primary Keys
A primary key is no longer required for tables replicated by Enterprise Replication (ER)
Use the WITH ERKEY keywords when defining tables
– Creates shadow columns (ifx_erkey_1, ifx_erkey_2, and ifx_erkey_3)
– Creates a new unique index and a unique constraint that ER uses as a primary key
For most database operations, the ERKEY columns are hidden
– Not visible to statements like SELECT * FROM tablename;
– Seen in DB-Access table/column information
– Included in the number of columns (ncols) in the systables system catalog
Example: CREATE TABLE customer (id INT) WITH ERKEY;
To view the contents of the ERKEY columns:
SELECT ifx_erkey_1, ifx_erkey_2, ifx_erkey_3 FROM customer;

86 Transaction Survival during Cluster Failover
Can now configure servers in a high-availability cluster environment to continue processing transactions after failover of the primary server
– Transactions running on secondary servers are not affected
– Transactions running on the secondary server that becomes the primary server are not affected
– Transactions running on the failed primary server are terminated
Benefits:
– Reduced application development complexity: design applications to run on any node
– Reduced application maintenance: less application downtime for cleanup and restarting after a failover

87 Transaction Survival – Configuration
FAILOVER_TX_TIMEOUT
– Maximum number of seconds the server waits before rolling back transactions after failure of the primary server
– 0: disable transaction survival (default value)
– > 0: enable transaction survival; 60 seconds is a reasonable starting point
On the failover node: the maximum time to wait for secondary nodes to reconnect before rollback
On a surviving secondary node: the maximum time to wait before returning an error to the user (-1803/-7351)

88 Fragmentation

89 Two New Fragmentation Schemes
List fragmentation
– Fragments data based on a list of discrete values
– Helps in logical segregation of data
– Useful when a table has a finite set of values for the fragment key and queries on the table have equality predicates on the fragment key
Interval fragmentation
– Fragments data based on an interval (numeric or time) value
– Tables have an initial set of fragments defined by a range expression
– When a row is inserted that does not fit in the initial range fragments, Informix automatically creates a fragment to hold the row

90 List Fragmentation
Fragments data based on a list of discrete values, e.g., states in the country or departments in an organization
The table below is fragmented on the column "state", also known as the fragment key or partitioning key; each PARTITION clause carries that partition's list values
CREATE TABLE customer (
  cust_id INTEGER,
  name VARCHAR(128),
  street VARCHAR(128),
  state CHAR(2),
  zipcode INTEGER,
  phone CHAR(12))
FRAGMENT BY LIST (state)
  PARTITION p0 VALUES ("WA", "OR", "AZ") in rootdbs,
  PARTITION p1 VALUES ("CA") in rootdbs,
  PARTITION p2 VALUES (NULL) in rootdbs,
  PARTITION p4 REMAINDER in rootdbs;

91 Details of Interval Fragmentation
Fragments data based on an interval (numeric or time) value
The table's initial set of fragments is defined by a range expression
When a row is inserted that does not fit in the initial range fragments:
– Informix automatically creates a fragment to hold the row
– No exclusive access is required for the fragment addition
– No DBA intervention
Purging a range can be done with a detach and drop; no exclusive access is required (see the sketch below)
If the dbspace selected for an interval fragment is full or down, Informix skips it and selects the next one in the list
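A hedged sketch of the detach-and-drop purge, assuming a hypothetical interval-fragmented orders table whose oldest range lives in a partition named p0:

-- split off the old range into its own table, online
ALTER FRAGMENT ON TABLE orders DETACH PARTITION p0 old_orders;
-- then archive or simply discard the detached rows
DROP TABLE old_orders;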

92 Example of Interval Fragmentation with Integers
CREATE TABLE orders (
  order_id INTEGER,                  -- fragment key
  cust_id INTEGER,
  order_date DATE,
  order_desc LVARCHAR)
FRAGMENT BY RANGE (order_id)
  INTERVAL ( )                       -- interval expression
  STORE IN (dbs1, dbs2, dbs3)        -- list of dbspaces
  PARTITION p0 VALUES < in rootdbs;  -- initial value

93 Example of Interval Fragmentation with Dates
CREATE TABLE orders (
  order_id INTEGER,
  cust_id INTEGER,
  order_date DATE,                   -- fragment key
  order_desc LVARCHAR)
FRAGMENT BY RANGE (order_date)
  INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))                 -- interval expression
  STORE IN (dbs1, dbs2, dbs3)                           -- list of dbspaces
  PARTITION p0 VALUES < DATE('01/01/2010') in rootdbs;  -- initial value

94 Usage Example of Interval Fragmentation with Dates
Given the monthly interval table from the previous slide:
What happens when you insert a row with order_date "07/07/2010"?
– A new fragment is automatically allocated for the month of July; it holds the values 7/1/2010 through 7/31/2010
What happens when you insert a row with order_date "10/10/2009"?
– The value is inserted into the existing partition p0

95 New Fragmentation Schemes Supported by OAT's Schema Manager

96 Update Statistics

97 Update Statistics Improvements for Fragmented Tables
New, finer granularity of statistics for fragmented tables
– Statistics are calculated at the individual fragment level
– Controlled by the new table property STATLEVEL
UPDATE STATISTICS no longer has to scan the entire table after an ATTACH/DETACH of a fragment
– Using the new: UPDATE STATISTICS FOR TABLE... [AUTO | FORCE]
– For extremely large tables, substantial Informix resources can be conserved by updating only the subset of fragments with stale statistics
Can also specify the criteria by which stale statistics are defined
– Using the new STATCHANGE property

98 Update Statistics Improvements
Each table/fragment tracks its number of updates, deletes, and inserts
New table properties statchange and statlevel:
– statchange: the percentage of a table/fragment that must change before statistics or distributions will be updated
– statlevel: specifies the granularity of distributions and statistics (TABLE, FRAGMENT, AUTO)
Fragment-level statistics and distributions:
– Stored at the fragment level
– Only fragments which exceed the statchange level are re-evaluated
– Detaching or attaching a fragment can adjust the table statistics without having to re-evaluate the entire table

99 Improving Update Statistics
Fragment-level statistics:
– When attaching a new fragment, only the new fragment needs to be scanned, not the entire table
– Only fragments which have expired statistics are scanned
Statistics expiration policies are defined at the table level
Detailed tracking of modifications to each table and fragment
Automatically skips tables or fragments whose statistics are not expired
ANSI databases implicitly commit after each index/column statistic is updated

100 Table-Level Optimizer Statistics Policies
STATLEVEL – defines the granularity or level of statistics created for a table
STATCHANGE – percentage of the table modified before table/fragment statistics are considered expired
CREATE TABLE … STATLEVEL [ TABLE | FRAGMENT | AUTO ] STATCHANGE …
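A hedged sketch of both properties on a concrete table; the placement of the clauses at the end of the statement, the table, and the values are illustrative assumptions:

CREATE TABLE orders (
    order_id INT,
    order_date DATE)
  FRAGMENT BY RANGE (order_id)
    INTERVAL (1000000) STORE IN (dbs1, dbs2)
    PARTITION p0 VALUES < 1000000 IN rootdbs
  STATLEVEL FRAGMENT   -- keep per-fragment distributions
  STATCHANGE 20;       -- refresh only after 20% of a fragment changes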

101 Fragment-Level Statistics – STATLEVEL Clause
Defines the granularity of statistics created for a table
TABLE
– The entire table is read, and table-level statistics are stored in the sysdistrib catalog
FRAGMENT
– Each fragment has its own statistics, stored in the new sysfragdist catalog
AUTO (default option for all tables)
– The system automatically determines the STATLEVEL
– FRAGMENT is chosen if the table is fragmented by EXPRESSION, INTERVAL, or LIST and has more than a million rows
– Otherwise, mapped to TABLE

102 Fragment-Level Statistics – STATCHANGE Property
A threshold applied at the table or fragment level to determine whether existing statistics are considered expired
Valid values for STATCHANGE are integers between 0 and 100
Can be set for:
– The entire server, using the new ONCONFIG parameter STATCHANGE (default 10%)
– The session level, using SET ENVIRONMENT STATCHANGE value
– The table level, by specifying the STATCHANGE property in the CREATE or ALTER TABLE statement
Order of precedence for STATCHANGE:
1. Table property
2. Session setting
3. ONCONFIG setting
4. Default value (10%)

103 New Syntax for Update Statistics
UPDATE STATISTICS FOR TABLE … [ AUTO | FORCE ]
AUTO
– Only tables or fragments having expired statistics are re-calculated
FORCE
– All indexes and columns listed in the command have their statistics re-calculated
The default behavior is set by the AUTO_STAT_MODE parameter
– Enabled by default (i.e., AUTO)
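A short sketch against a hypothetical orders table:

-- refresh distributions only where the STATCHANGE threshold says they are stale
UPDATE STATISTICS MEDIUM FOR TABLE orders AUTO;
-- unconditionally rebuild statistics for one column
UPDATE STATISTICS HIGH FOR TABLE orders (order_id) FORCE;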

104 Data Warehouse

105 Multi-Index Scan
Utilizes multiple indexes when accessing a table
Example: the following indexes exist on a table
– Index idx1 on tab1(c1)
– Index idx2 on tab1(c2)
SELECT * FROM tab1 WHERE c1 = 27 AND c2 BETWEEN 77 AND 88
The server does the following:
– Index scans on idx1 and idx2
– Combines the results
– Looks up the rows satisfying both index scans
The new "skip scan" is used: it looks like a sequential scan, but only reads the required rows

106 OAT's View of a Multi-Index Scan

107 Star Join for Snowflake Queries
Star join is a new query-processing method
Improves query performance for star-schema queries
Requires multi-index scan and skip scan
A snowflake schema is an extension of a star schema with multiple levels of dimension tables
Uses bitmap technology internally for efficient removal of unnecessary data rows
Uses push-down technology

108 What Is a Push-Down Hash Join – How It Works!
Discards fact-table rows early
Reduces the number of fact-table rows accessed, based on predicates on the dimension tables
Takes advantage of multiple foreign-key indexes on the fact table, relying on multi-index scan
Uses hashing when an index is absent
Uses the alternate-parent mechanism
Requires PDQ (exchange iterator)
(Figure: an iterator tree in which a PDHJ node joins a dimension-table scan with a fact-table scan on the primary key through an XCHG exchange iterator, below the rest of the iterator tree.)

109 Snowflake and Star Joins

110 Agenda
New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

111 What Are Light Scans (Recap)?
A sequential scan of large tables that reads pages in parallel from disk and stores them in private buffers in virtual memory
Advantages of light scans for sequential scans:
– Bypass the overhead of the buffer pool when many pages are read
– Prevent frequently accessed pages from being forced out of the buffer pool
– Transfer larger blocks of data in one I/O operation (64K/128K, platform dependent)
Conditions to invoke light scans:
– The optimizer chooses a sequential scan of the table
– The number of table pages > the number of buffers in the buffer pool
– The isolation level obtains no lock or a shared lock on the table
RTO_SERVER_RESTART automatically enables light scans
Monitor using onstat -g scn

112 Light Scan Support for All Data Types (11.50.xC6)
Can now enable Informix to perform light scans on:
– VARCHAR, LVARCHAR, NVARCHAR
– Compressed tables
– Any table with rows larger than a page
Tables now only have to be greater than 1 MB in size
– Versus greater than the size of the BUFFERPOOL
Light scans for fixed-length rows were already enabled
Enable via:
– Environment: export IFX_BATCHEDREAD_TABLE=1
– ONCONFIG file: BATCHEDREAD_TABLE 1
– Session: SET ENVIRONMENT IFX_BATCHEDREAD_TABLE '1';

113 Light Scan Support for All Data Types (11.70.xC1)
Automatic light scans on tables
– Informix now automatically performs light scans when appropriate
– No longer have to set configuration parameters to enable these scans
New BATCHEDREAD_INDEX configuration parameter
– Enables the optimizer to automatically fetch a set of keys from an index buffer
– Reduces the number of times a buffer is read

114 Light Scan Support for All Data Types (11.70.xC1)
onstat -g lsc
– Displays information based on pages scanned of large data tables when the BATCHEDREAD_TABLE configuration parameter or the IFX_BATCHEDREAD_TABLE environment option is not enabled
– Note: this is deprecated
onstat -g scn
– Displays the status of all light scans (starting in xC6)
(Output: the RSAM batch sequential scan report lists SesID, Thread, Partnum, Rowid, Rows Scan'd, Scan Type, Lock Mode, and Notes columns; the example rows show "Buffpool" scans with "Slock+Test" lock mode.)

115 Other Performance Enhancements
Automated DB Scheduler tasks added to help with performance:
– Time out users that have been idle for too long
– Automatically allocate CPU VPs to match hardware/licensing when IDS starts
– Alerts for tables that have outstanding in-place alters
– Ability to configure the automatic compressing, shrinking, repacking, and defragmenting of tables and extents
Large page support on Linux
– Previously, only AIX and Solaris systems were supported
– The use of large pages can provide performance benefits in large memory configurations

116 Agenda
New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

117 Prevent Accidental Disk Initialization of an Instance
FULL_DISK_INIT configuration parameter
– Specifies whether or not the disk initialization command (oninit -i) can be executed in an Informix instance when a page zero exists at the root path location
– Prevents accidental initialization of an instance (or of another instance) when the first page of the first chunk (page zero) exists at the root path location
– Page zero, created when Informix is initialized, is the system page that contains general information about the server
Values:
– 0: the oninit -i command runs only if there is not a page zero at the root path location
– 1: the oninit -i command runs under all circumstances; also resets FULL_DISK_INIT to 0 after the disk initialization

118 Tool for Collecting Data for Specific Problems
New ifxcollect tool to collect diagnostic data when troubleshooting a specific problem, such as an assertion failure
Can also specify options for transmitting the collected data via the File Transfer Protocol (FTP)
Located in the $INFORMIXDIR/bin directory
Output files are located in the $INFORMIXDIR/isa/data directory
Examples:
– To collect information for a general assertion failure: ifxcollect -c af -s general
– To collect information for a performance problem related to CPU utilization: ifxcollect -c performance -s cpu
– To include FTP information, specify the additional options: -f -e -p f -m machine -l /tmp -u user_name -w password

119 Backup to Cloud – Overview
Support for backing up Informix data to the Amazon Simple Storage Service (S3) cloud storage system, and restoring from it, using the ontape backup and restore utility
Benefits:
– Simplifies backing up Informix data to an off-site S3 storage location, which can be accessed from anywhere on the web
– Scalable storage capacity to match the growth in Informix backup data (within the backup object size limit imposed by S3)
– Reliable storage system through the SLA provided by S3
– The pay-as-you-go model can provide a cost-effective Informix backup solution

120 Backup to Cloud – How It Works
Backup:
1. ontape backs up the data to a file in a local directory
2. ontape starts the Cloud Client and waits for it to finish
3. The Cloud Client transfers the backup file from the local directory to S3
4. The Cloud Client returns its execution status to ontape, and ontape finishes
Restore:
5. ontape starts the Cloud Client and waits for it to finish
6. The Cloud Client retrieves the backup file from S3 into the local directory
7. ontape restores the server from the local file

121 Websphere MQ

122 Prior to IDS Panther (11.50 and Earlier)
(Figure: an order entry application and a shipping application exchange messages with Informix Dynamic Server through Websphere MQ queues such as shippingreq, creditque, and recvdnotify, supporting credit processing and inventory.)
MQ functions:
– Send, receive, publish, subscribe
– Transaction management, 2-phase commit
Simplified interface:
– SQL-based programs get SQL-based MQ access
– Use a virtual table to map a queue to a table
– Send and receive via INSERT and SELECT
– Send strings or documents (CLOB/BLOB)

123 With MQ Enhancements in Informix
(Figure: IDS, DB2, and Oracle servers each connect to Websphere MQ shippingreq queues in a distributed topology.)
Supports a distributed topology for IDS and Websphere MQ
Server-based MQ messaging
Client-based MQ messaging
Supports multiple queue managers within a single transaction
New functions
Simplified interface: SQL-based programs get SQL-based MQ access with 2-phase commit
