Presentation on theme: "Fragmentation / Partitioning"— Presentation transcript:

0 Informix 11.70 Version Overview
John F. Miller III, IBM

1 Fragmentation / Partitioning
Talk Outline
11.70 Overview
Storage Enhancements: Storage Provisioning, Storage Optimization, Compression
Index Improvements: Forest of Trees Indexes, Create Index extent sizes, Constraint without an index
Miscellaneous: Network Performance, Pre-Load C-UDRs
Fragmentation / Partitioning: Interval Fragmentation, Add and Drop Fragments Online, Fragment Level Statistics
Data Warehouse: Multi Index Scans, Star and Snowflake joins

2 Storage Provisioning

3 What is Storage Provisioning
Proactively or reactively add storage to eliminate out-of-space errors
Monitor spaces and automatically grow a container when its free space falls below a specified amount
Stall an SQL statement that is about to fail because of insufficient space until space is allocated to the depleted container
The ability to tell Informix about disk space that can be used to solve storage issues in the future: raw devices, cooked files, directories

4 Benefits of Storage Provisioning
"Out-of-space" errors are virtually eliminated. Manual expansion and creation of storage spaces without having to worry about where the space will come from Automatic expansion of dbspaces, temporary dbspaces, sbspaces, temporary sbspaces, and blobspaces. Feature is fully incorporated into OAT.

5 Storage Provisioning: The Power of 2
Two available modes: Manual, Automatic
Two available space expansion methods: Chunk extension, Chunk creation
Two available interfaces: sysadmin task()/admin() functions (SQL interface), OAT (graphical interface)

6 What is the Storage Pool
Storage Pool Facts
The storage pool is how the DBA tells Informix about space it can use to solve future space issues
A file, device, or directory in the pool is called an entry
There is one storage pool per IDS instance
You can add, modify, delete, and purge storage pool entries (see the sketch below)
Example of adding an entry:
EXECUTE FUNCTION task("storagepool add", "/work/dbspaces/dbs1", "0", "1GB", "100MB", "1");
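A hedged sketch of the other storage pool operations mentioned above (the entry IDs, sizes, and exact argument order are assumptions based on the sysadmin API; verify against your release):
-- Modify entry 1: new total size, new allocation chunk size, new priority (assumed order)
EXECUTE FUNCTION task("storagepool modify", "1", "10GB", "100MB", "2");
-- Remove entry 2 from the pool
EXECUTE FUNCTION task("storagepool delete", "2");
-- Purge entries flagged as being in an error state (assumed keyword)
EXECUTE FUNCTION task("storagepool purge errors");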

7 OAT’s View of the Storagepool
(OAT screenshot: automatic policies and a summary of the space left in the storage pool)

8 Extendable Chunks The ability to expand an existing chunk
Default upon creation is non-expanding chunks
A chunk can be extended automatically or manually
Extending a chunk does NOT consume space from the storage pool
Example of enabling the extendable property of chunk 13:
EXECUTE FUNCTION task("modify chunk extendable on", "13");
Example of manually extending chunk 27 by 2 GB:
EXECUTE FUNCTION task("modify chunk extend", "27", "2GB");

9 OAT’s View of the Chunk Pod
(OAT screenshot: fragmentation map of the selected chunk; chunk actions: extend a chunk, add a new chunk, modify chunk settings, drop a chunk)

10 Expanding a Storage Container
Keep the addition of space to a storage container simple
The creator of a storage container specifies how a space should grow
Manual allocation of space: just say do it (see the sketch below)
Use the predefined container provisioning policies to allocate new space to a container:
Determine whether any chunk in the storage container is expandable
If no chunk can successfully expand, add a new chunk
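A minimal hedged sketch of the manual path through the sysadmin API (the space name and sizes are hypothetical, and the "modify space expand" / "modify space sp_sizes" argument forms should be verified against your release's documentation):
-- Grow dbspace1 immediately by about 1 GB (size in KB)
EXECUTE FUNCTION task("modify space expand", "dbspace1", "1000000");
-- Set the create/extend sizes used by the automatic provisioning policies (assumed argument order)
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace1", "20", "1.5");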

11 Expanding a Space in OAT

12 Creating or Dropping a Space with the Storagepool
You can create a new storage container utilizing space from the storage pool
Example of creating a 100 MB dbspace called orders_dbs:
EXECUTE FUNCTION ADMIN ('create dbspace from storagepool', 'orders_dbs', '100M');
You can drop an existing storage container and return the space to the storage pool
Example of dropping a dbspace called dbs1:
EXECUTE FUNCTION ADMIN ('drop dbspace to storagepool', 'dbs1');

13 Save your Company

14 Prevent Accidental Disk Initialization
Save companies from potential disasters: accidental disk re-initialization (i.e., oninit -i)
New onconfig parameter FULL_DISK_INIT:
0 Only allow system initialization if page zero of the rootdbs is not recognized
1 Always allow system initialization; after initialization, the value is automatically set back to 0
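As a small illustration of the parameter described above (the "#" comment syntax is standard for the onconfig file; the comments paraphrase the slide):
FULL_DISK_INIT 0   # allow initialization only if page zero of the rootdbs is not recognized
FULL_DISK_INIT 1   # always allow initialization; automatically reset to 0 afterward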

15 Storage Optimization

16 Optimizing Tables
As a DBA I need to …
Reduce the number of extents a table contains
Move all rows to the beginning of a table
Return unused space at the end of a table to the system
Shrink a partially used extent at the end of a table
All this while accessing and modifying the table!!!
AND while you are watching your favorite TV show

17 Storage Management Overview
Defragment Extents: combine extents to reduce the number of extents in a table
Data Compression: reduces the amount of storage taken by a single row
Table Compaction: reduces the number of pages utilized by a table
Index Compaction: ensures the index pages are kept full
Automate the optimization of table storage: applies policies to optimize tables

18 Optimizing Table Extents - Defragment
The number of extents a table/partition can have has increased
Defragment Extents: moves a table's extents to be adjacent and merges them into a single extent
You can improve performance by defragmenting partitions to merge non-contiguous extents. A frequently updated table can become fragmented over time, which degrades performance every time the server accesses the table. Defragmenting a table brings data rows closer together and avoids partition header page overflow problems. Defragmenting an index brings the entries closer together, which improves the speed at which the table information is accessed.
You cannot stop a defragment request after the request has been submitted. Some specific objects cannot be defragmented, and you cannot defragment a partition if another operation that conflicts with the defragment request is running.
Tip: Before you defragment a partition, review the limitations and considerations on defragment requests, and run oncheck -pt or -pT to determine the number of extents for a specific table or fragment.
To defragment a table, index, or partition, run EXECUTE FUNCTION with the defragment argument, specifying the table name, index name, or partition number. You do not have to take the database offline, back it up, and restore it somewhere else.
(Diagram: customer, orders, items, and products extents scattered across dbspace1 are merged so that the customer table ends up with a single extent.)
EXECUTE FUNCTION ADMIN ('DEFRAGMENT', 'db1:customer')

19 Optimizing Tables and Indexes

20 Defragment Table Extents OnLine

21 Data Compression
Reduce the space occupied by the row
Compressing a table can be done online
Compress either a table or a fragment
A custom dictionary is built for each fragment to ensure the highest levels of compression
Tables with compressed rows are ALWAYS variable-length rows
Many benefits: smaller archives, more data in the buffer pool, fewer long/forwarded rows, fewer I/Os for the same amount of data read/written
execute function task("compress table", "tab1", "db")
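For completeness, a hedged sketch of reversing the operation, mirroring the argument pattern the slide uses for compress ("tab1" and "db" are the slide's example names; verify the exact keyword order for uncompress against the sysadmin API documentation):
execute function task("uncompress table", "tab1", "db");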

22 REPACK Command
Moves all rows in a table/fragment to the beginning, leaving all the free space at the end of the table
Online operation: users can be modifying the table
(Diagram: rows of the customer table are moved to the start of the table)
execute function task("table repack", "customer", "db")

23 SHRINK Command
Frees the space at the end of a table so other tables can utilize this space
Entire extents are freed; the last extent in a table can be partially freed
Will not shrink a table smaller than the first extent size
New syntax to modify the first extent size: ALTER TABLE ... MODIFY EXTENT SIZE (see the sketch below)
Online operation
(Diagram: free space at the end of the customer table is returned to the dbspace)
execute function task("table shrink", "customer", "db")
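A minimal sketch of the new ALTER TABLE clause mentioned above (the table name and kilobyte sizes are hypothetical):
ALTER TABLE customer MODIFY EXTENT SIZE 64;
ALTER TABLE customer MODIFY NEXT SIZE 32;
execute function task("table shrink", "customer", "db");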

24 Automatically Optimize Data Storage

25 Index Optimization

26 Create Index with a Specific Extent Size
The CREATE INDEX syntax has been enhanced to support specifying an extent size for indexes
Better sizing and utilization
Default index extent size is (index key size / data row size) * data extent size
CREATE INDEX index_1 ON tab_1(col_1) EXTENT SIZE 32 NEXT SIZE 32;

27 Create Index Extent Sizes
create index cust_ix1 on customer (cust_num) in rootdbs extent size 80 next size 40;
Ability to specify the extent size when creating an index
Allows for optimal space allocation
Utilities such as dbschema report the index extent size

28 Creating Constraints without an Index
CREATE TABLE parent(c1 INT PRIMARY KEY CONSTRAINT parent_c1, c2 INT, c3 INT);
CREATE TABLE child(x1 INT, x2 INT, x3 VARCHAR(32));
ALTER TABLE child ADD CONSTRAINT (FOREIGN KEY(x1) REFERENCES parent(c1) CONSTRAINT cons_child_x1 INDEX DISABLED);
Saves the overhead of the index for small child tables

29 B-Tree Index
The figure above shows one root node; access to the underlying twig and leaf pages is through this page, which is where mutex contention can occur

30 New Index Type “Forest Of Trees”
Traditional B-tree indexes suffer from performance issues when many concurrent users access the index:
Root node contention can occur when many sessions are reading the same index at the same time
The depth of a large B-tree index increases the number of levels, which results in more buffer reads
Forest of Trees (FOT) reduces some of these B-tree index issues:
The index is larger but often not deeper
Reduces the time for index traversals to leaf nodes
The index has multiple subtrees (root nodes) called buckets
Reduces root node contention by enabling more concurrent users to access the index

31 Forest of Trees Index (FOT Index)
Reduces contention on an index's root node
Several root nodes
Some B-tree functionality is NOT supported: max() and min()
create index index_2 on TAB1( C1, C2 ) hash on ( C1 ) with 3 buckets;

32 FOT - Determining use - ONCHECK & SYSMASTER
Check oncheck -pT information: the index level summary (columns such as Level, Total No. Keys, Average Free Bytes) shows, in this example, that there are 100 level-1 buckets (root nodes)
Check the sysmaster database:
select nhashcols, nbuckets from sysindices

33 Network & UDR Performance

34 Network Performance Improvements
Caching network services
Multiple listener threads for a single server name
Multiple file descriptor servers
Previous network improvements: dynamic start and stop of listener threads, pre-allocated user sessions

35 Network Performance - Caching Network Services
Database caching of hosts, services, users, and groups
Avoids going to the operating system for each network call
Administrator-defined timeout value set for the network caches
ONCONFIG example: NS_CACHE host=900,service=900,user=900,group=900
Each cache is dynamically configurable (see the sketch below)
onstat -g cache prints the effectiveness of the caches
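A hedged sketch of adjusting the caches from SQL instead of editing the onconfig file by hand (assuming the generic "set onconfig memory" sysadmin function accepts NS_CACHE like other dynamically tunable parameters; the timeout values are hypothetical):
EXECUTE FUNCTION task("set onconfig memory", "NS_CACHE", "host=1800,service=1800,user=1800,group=1800");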

36 Network Performance – Multiple Listeners
Able to define multiple listener threads for a single DBSERVERNAME and/or DBSERVERALIAS
Add the number of listeners to the end of the alias
EXAMPLE: To start three listener threads for the idsserver, modify the ONCONFIG as follows:
DBSERVERNAME idsserver-3
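Related to the earlier point about dynamic listener control, a hedged sketch of restarting the listeners for an alias through the sysadmin API (the server name is hypothetical):
EXECUTE FUNCTION task("stop listen", "idsserver");
EXECUTE FUNCTION task("start listen", "idsserver");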

37 Network Performance Results
My simple network performance tests: 200 users connecting and disconnecting
Connection throughput on an AIX server improved by 480%
Connection throughput on a Linux server improved by 720%
Computer Type   Without Improvements   Utilizing Improvements
AIX 64          2m 5s                  27s
Linux 64        10m 11s                1m 20s

38 Improve Throughput of C User Defined Routines (C-UDR)
Preloading a C-UDR shared library allows Informix threads to migrate from one VP to another during the execution of the C-UDR
Increases performance and balances workloads
Without this feature:
The C-UDR shared libraries are loaded when the UDRs are first used
The thread executing the UDR is bound to the VP for the duration of the C-UDR execution
PRELOAD_DLL_FILE $INFORMIXDIR/extend/test.udr
PRELOAD_DLL_FILE /var/tmp/my_lib.so

39 Verifying the C-UDR shared library is preloaded
online.log during server startup:
14:23:41 Loading Module </var/tmp/test.udr>
14:23:41 The C Language Module </var/tmp/test.udr> loaded
onstat -g dll shows new flags: 'P' represents preloaded, 'M' represents that the thread can migrate
Datablades:
addr        slot  vp  baseaddr       flags  filename
0x4b...               0x2a985e3000   PM     /var/tmp/test.udr
0x4c2bc...            0x2a985e3000   PM
0x4c2e...             0x2a985e3000   PM

40 Update Statistics

41 Agenda New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

42 Seamless installation and Smarter configuration
Can migrate from earlier Informix versions (including 10.0, 9.40, and 7.31) directly to Informix Version 11.70
New installation application, using the new ids_install command, makes it easier to install and configure Informix products and features
A typical installation now has improved default settings to quickly install all of the products and features in the software bundle, with preconfigured settings
The custom installation is also smarter than before and allows you to control what is installed
Both types of installation allow you to create an instance that is initialized and ready to use after installation
You must use a custom installation if you want to configure the instance for your business needs

Installing Informix and client products quickly with defaults (UNIX and Linux): you can install IBM Informix and all its features quickly by using the typical setup.
From a command prompt, run the installation command for the products that you want to install and specify the options for the command. The commands are in the directory where the media files are located (media_location). The installation application runs in console mode by default unless you specify GUI mode when you issue the command: media_location/ids_install installs Informix with all features, plus any bundled client products that you select.
Follow the instructions in the installation application: read and accept the license, install into the default directory or select a different one, and select Typical setup to install the product with all features. If the installation application notifies you that the target path is not secure, see "Secure a nonsecure Informix installation path" for how to proceed.
If you are prompted for an Informix administrator password, enter a password and record it in a secure location. The installation application creates the administrator account, and you will need the password to administer the installation. This account is referred to as user informix throughout Informix products and documentation.
Optional: to set up a ready-to-use Informix instance as part of the installation, ensure that the Create a server instance check box is selected (selected by default in console mode, but not in GUI mode); to have the instance initialize at creation, click Initialize server. If you do not select the Create a server instance option, you can configure and initialize the database server manually after installation. Click Help in the installation application window for more information.
Verify that the installation summary accurately reflects your installation options and that the server has enough free space for the total installation; go back and adjust the options as necessary. Then complete the installation and exit the installation application. If you did not create a server instance in the installation application, see "Configuring a database server" to set up an instance.

43 Changes to Installation Commands
Some installation commands changed:
To take advantage of new and changed functionality
To improve consistency across products and operating systems
Deprecated commands: installserver, installclientsdk, installconn
Must use ids_install to install Informix with or without bundled software
New uninstallids command removes the server, any bundled software, or both
To remove specific products, use the following commands, which are in new subdirectories relative to the root directory:
uninstall/uninstall_server/uninstallserver
uninstall/uninstall_clientsdk/uninstallclientsdk
uninstall/uninstall_connect/uninstallconnect (formerly uninstallconn)
uninstall/uninstall_jdbc/uninstalljdbc.exe or java -jar uninstall/uninstall_jdbc/uninstaller.jar (depending on how you installed the JDBC driver)
You can still download the standalone IBM Informix Client Software Development Kit (Client SDK), IBM Informix Connect, and IBM Informix JDBC Driver media to install the client software on other computers.

44 Auto-Registration and Auto VP Creation
Database extensions (formerly known as built-in DataBlade modules) are automatically registered when they are first used
Basic Text Search, Web Feature Service, Node, Spatial, Binary, Large Object Locator, TimeSeries, and MQ Messaging can now be used without first registering them in your database
Prerequisite tasks, such as registering the extensions or creating specialized virtual processors, are no longer required
The BTS, WFSVP, and MQ virtual processors are created automatically
The idsxmlvp virtual processor is created automatically when an XML function is first used
An sbspace is created automatically for basic text searches and spatial extensions, if a default sbspace does not exist
To increase the number of idsxmlvp virtual processors, either add the following line to your onconfig file, substituting n with the number of virtual processors you want, and restart the database server: VPCLASS idsxmlvp,num=n
Or, as user informix, run the following command while the database server is running: onmode -p +n idsxmlvp

45 dbschema and dbexport Enhancements
dbschema and dbexport utility enhancement for omitting the specification of an owner
Use the new -nw option to generate the SQL for creating an object without specifying an owner
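A hedged sketch of invoking the option (the database, table, and file names are hypothetical; confirm the exact flag placement with dbschema's usage message for your release):
dbschema -d stores_demo -t customer -nw customer_schema.sql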

46 Generating Storage Spaces and Logs with dbschema
Can now generate the schema of storage spaces, chunks, and physical and logical logs with the dbschema utility
Choose to generate:
SQL administration API commands: dbschema -c dbschema1.out
onspaces and onparams utility commands: dbschema -c -ns dbschema2.out
Optionally, specify -q before -c or -c -ns to suppress the database version in the output (for example: dbschema -q -c -ns dbschema3.out)
For migrations, generate the schema before unloading data with the dbexport and dbimport utilities

SQL administration API format:
# Dbspace 1 -- Chunk 1
EXECUTE FUNCTION TASK ('create dbspace', 'rootdbs', '/export/home/informix/data/rootdbs', '200000', '0', '2', '500', '100')
# Dbspace 2 -- Chunk 2
EXECUTE FUNCTION TASK ('create dbspace', 'datadbs1', '/export/home/informix/data/datadbs', ' ', '0', '2', '100', '100')
# Physical Log
EXECUTE FUNCTION TASK ('alter plog', 'rootdbs', '60000')
# Logical Log 1
EXECUTE FUNCTION TASK ('add log', 'rootdbs', '10000')

onspaces/onparams format:
# Dbspace 1 -- Chunk 1
onspaces -c -d rootdbs -k 2 -p /export/home/informix/data/rootdbs -o 0 -s en 500 -ef 100
# Dbspace 2 -- Chunk 2
onspaces -c -d datadbs1 -k 2 -p /export/home/informix/data/usrdbs -o 0 -s en 100 -ef 100
# Logical Log 1
onparams -a -d rootdbs -s 10000

47 Support for the IF EXISTS and IF NOT EXISTS keywords
Now you can include the IF NOT EXISTS keywords in SQL statements that create a database object (or a database)
You can also include the IF EXISTS keywords in SQL statements that destroy a database object (or a database)
If the condition is false, the CREATE or DROP operation has no effect, but no error is returned to the application: with IF NOT EXISTS, the database server takes no action (rather than sending an exception) if an object of the specified name already exists; with IF EXISTS, it takes no action if no object of the specified name exists
Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax
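A minimal sketch of the new syntax (the database and table names are hypothetical):
CREATE DATABASE IF NOT EXISTS testdb WITH LOG;
CREATE TABLE IF NOT EXISTS tab1 (c1 INT PRIMARY KEY, c2 CHAR(20));
DROP TABLE IF EXISTS tab_old;
DROP DATABASE IF EXISTS olddb;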

48 Simplified SQL syntax for Defining Database Tables
No more restrictions on the order in which column attributes can be defined in DDL statements
Simplifies the syntax rules for column definitions in the CREATE TABLE and ALTER TABLE statements
The specifications for default values can precede or follow any constraint definitions
The list of constraint definitions can also be followed (or preceded) by the default value, if a default is defined on the column
The NULL or NOT NULL constraint does not need to be listed first if additional constraints are defined
Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax

The constraints (on a single column or on a set of multiple columns) can be defined in any order within the constraint specifications. The list of constraints can include the NULL keyword to indicate that the column can accept NULL values; the NULL constraint is not valid for columns of serial or complex data types and cannot be specified together with NOT NULL or PRIMARY KEY (the CREATE TABLE statement fails with an error if you specify both NOT NULL and NULL on the same column).
Single-Column Constraint format: use it to define and name one or more constraints on a single column and to specify the constraint mode that controls behavior during insert, delete, and update operations. The following example creates a standard table with two constraints: num, a primary-key constraint on the acc_num column, and code, a unique constraint on the acc_code column:
CREATE TABLE accounts (
acc_num INTEGER PRIMARY KEY CONSTRAINT num,
acc_code INTEGER UNIQUE CONSTRAINT code,
acc_descr CHAR(30));
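A small sketch of the relaxed ordering itself (the table and column names are hypothetical); the second column uses an order that earlier releases rejected:
CREATE TABLE order_items (
    item_qty INT DEFAULT 1 NOT NULL,            -- default before the constraint
    item_descr CHAR(20) NOT NULL DEFAULT 'none' -- constraint before the default
);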

49 Stored Procedure Debugging (SPD)
Need for application developers to debug SPL procedures in Informix when necessary
Should be able to execute the SPL routine line by line, stepping into nested routines, analyzing the values of local, global, and loop variables
Should be able to trace the execution of SPL procedures; trace output should show the values of variables, arguments, return values, and SQL and ISAM error codes
Prerequisites:
Informix or above
Integration with the Optim Data Studio procedure debugger
Integration with the Microsoft Visual Studio debugger
DRDA must be enabled

50 SPD - Supported Commands
Breakpoints
Run
Step Over
Step Into (for nested procedures)
Step Return
Get Variable value
Set Variable value

51 Explicit PDQ vs Implicit PDQ
Explicit PDQ: user setting (SET PDQPRIORITY statement); all queries in the current session use the same setting
Implicit PDQ: IDS determines the resource requirement based on the optimizer's estimates; each query can have a different PDQ setting
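A minimal sketch contrasting the two settings (the priority value is illustrative):
SET PDQPRIORITY 50;               -- explicit: one setting for every query in the session
SET ENVIRONMENT IMPLICIT_PDQ ON;  -- implicit: the server picks a value per query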

52 Implicit PDQ - Enable SET ENVIRONMENT IMPLICIT_PDQ ON/OFF
SET ENVIRONMENT IMPLICIT_PDQ ON/OFF
Enables/disables implicit PDQ for the current session (OFF by default)
When enabled, Informix automatically determines an appropriate PDQPRIORITY value for each query and ignores the explicit PDQ setting unless BOUND_IMPL_PDQ is also set
When disabled, Informix does not override the current PDQPRIORITY setting
If you set a value between 1 and 100 instead of ON, the database server scales its estimate by the specified value; a low value reduces the memory assigned to the query, which might increase query-operator overflow
SET ENVIRONMENT BOUND_IMPL_PDQ ON/OFF
Uses the explicit PDQPRIORITY setting as the upper bound when calculating the implicit PDQ setting; if IMPLICIT_PDQ is OFF, BOUND_IMPL_PDQ is ignored
How the grant is computed: the optimizer estimate for each memory-consuming operator is used to calculate the memory usage of each operator; the maximum memory for a query is calculated from the memory-consuming operators and is used to calculate the implicit PDQ setting for the query; the memory grant is based on the implicit PDQ calculation and current resource limits
The database server never allocates more memory than is available when PDQPRIORITY is set to 100, as determined by MAX_PDQPRIORITY / 100 * DS_TOTAL_MEMORY
The IMPLICIT_PDQ session environment option is available only on systems that support PDQPRIORITY
Star-join query execution plans require PDQ priority to be set; if IMPLICIT_PDQ is set to ON for the session, a star-join execution plan is considered without explicitly setting PDQPRIORITY. The SET ENVIRONMENT IMPLICIT_PDQ ON statement can be issued by a sysdbopen procedure, so that users automatically enable implicit PDQ when they open the database
IMPLICIT_PDQ requires at least low-level statistics on all tables in the query; if distribution statistics are missing for one or more tables, the IMPLICIT_PDQ setting has no effect (this restriction also applies to star-join queries)
Examples:
SET ENVIRONMENT IMPLICIT_PDQ ON;
SET ENVIRONMENT BOUND_IMPL_PDQ ON;
When BOUND_IMPL_PDQ is set to ON (or to one), the database server uses the explicit PDQPRIORITY setting as the upper bound for memory that can be allocated to a query; if you set both IMPLICIT_PDQ and BOUND_IMPL_PDQ, the explicit PDQPRIORITY value determines the upper limit of memory that can be allocated to the query.

53 Agenda New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

54 Deployment Assistant (DA) – Self Configuring
Enables users to easily package snapshots of Informix instances and/or their data in preparation for deployment
In past releases a snapshot had to be created manually; built-in intelligence now captures and configures an Informix snapshot more easily
Allows reduction of the packaged instance to the user's minimum desired configuration
Graphical User Interface (GUI) developed in Java/Eclipse SWT
ifxdeployassist command starts the deployment assistant interface, which prompts for the required information to capture the instance; use the -c option to pass command options in a scripting environment instead of being prompted (you must use the interface, not the command line, to capture a reduced-footprint snapshot that contains only specific features)
Simplified method of packaging snapshots of Informix instances, reducing the work required of DBAs to perform this task manually

55 Deployment Assistant (DA) – Packages
Produces packages that are ready for use by the Deployment Utility (DU)
Build a package containing Informix, (optional) pre-built database(s), and (optional) applications
Compress the package without using 3rd-party compression tools (BZIP2, GZIP, TAR, and ZIP formats are supported)
Deploy, decompress, and install the package on multiple systems; good for media distribution such as CDs
Supported on Windows and Linux
No current support for data on RAW devices
The deployment utility automatically extracts snapshots that were compressed in BZIP2, GZIP, TAR, and ZIP formats; in the previous release you had to specify the -extractcmd option to extract BZIP2 and GZIP formats

56 Deployment Assistant (DA) – Usage
To run the Deployment Assistant, run the following command in <INFORMIXDIR>/bin: ifxdeployassist
On Windows, executing this command with the INFORMIXSERVER environment variable set triggers automatic detection of the specified instance

57 Deployment Utility (DU) – New Options
IDS xC6:
Available on all platforms: ifxdeploy
Can deploy pre-configured snapshots of IDS instances on one or more machines by unzipping an archive, creating users, updating configuration files, setting file permissions, and so on
Can create new instances from existing ones or from onconfig.std, or uninstall instances
Chunks can be dynamically relocated to a new path at deployment time
New command-line options (11.70):
-start starts the Informix instance after deployment and waits for it to be initialized (equivalent to running oninit -w); optionally add a number of seconds to wait before the command returns
The Deployment Utility configuration file has a new option START: set it to the default value of 0 to not start IDS, or to a number of seconds to start IDS and wait that long (600 is the recommended value, the same as the oninit -w default); example: START 600
-autorecommend calculates optimal values for Informix configuration parameters based on the planned usage for the instance and the host environment; it generates an alternative customized configuration file (%ONCONFIG%.autorec on Windows, $ONCONFIG.autorec on Linux and UNIX), and some suggested changes do not take effect until the instance is reinitialized
Additional notes: built-in support for zipped tar files on UNIX (.tgz) or Zip files (Windows); a config file, ifxdeploy.conf, can be used as an alternative to command-line arguments and environment variables, with support for DBSERVERALIASES and customizing the ONCONFIG file; multiple instances can be installed to multiple locations on a machine (all platforms); if pre-configured dbspaces are not supplied, a new root dbspace is created and IDS is started with the -i option to initialize the disk; ifxdeploy.conf contains new parameters so that you can run the deployment utility with fewer command-line options

58 Deployment Utility (DU) – Example
To deploy a zipped tar file of an instance that prints verbose messages, sets the SERVERNUM to 2, relocates the chunks to /work/chunks, sets new TCP listening ports, and starts the instance after deployment:
export INFORMIXDIR=/work/ixdir2; export INFORMIXSERVER=ixserver2;
ifxdeploy -file /work/snapshots/ifxdir.tgz -verbose -servernum 2 -relocate /work/chunks -rootpath /work/chunks -sqliport <port> -drdaport <port> -start -y
To create and start a new instance using an existing INFORMIXDIR:
export INFORMIXDIR=/work/ixdir;
ifxdeploy -servernum 2 -sqliport <port> -drdaport <port> -start -y

59 Unique Event Alarms
Informix uses the event alarm mechanism to notify the DBA about any major problems in the database server
Default alarm program scripts:
UNIX: $INFORMIXDIR/etc/alarmprogram.sh
Windows: %INFORMIXDIR%\etc\alarmprogram.bat
ONCONFIG parameters: ALARMPROGRAM, SYSALARMPROGRAM

60 Unique Event Alarms - Overview
Informix has 79 event class IDs, and for each event alarm class there can be multiple specific messages
In previous releases it was not easy to differentiate one type of event alarm from another for the same event alarm class: the user had to parse the specific message string that is passed to the alarm program as one of its parameters, which is very inconvenient for applications that deeply embed IDS
Panther (11.70) provides unique numerical values for each specific message, so applications can interpret and take actions against each event alarm
Standardizes errors and alarm codes for application exception handling: out of memory, out of disk, root uninitialized, assertion failure, IDS not running, etc.

61 Programmability Enhancements
Consistent return codes for server initialization (oninit)
Very helpful for applications that administer Informix in deeply embedded environments; the application can take the appropriate action to bring the instance online successfully
During server initialization in embedded environments, the application may have to take actions for:
Shared memory creation/initialization failed
Could not find libelf/libpam/…
Incorrect command line syntax
Error reading/updating onconfig
Error calculating defaults in onconfig
Incorrect serial number
Not DBSA
Incorrect SQLHOSTS entries
Previously, the oninit process did not generate a return code other than 0 (success) or 1/-1 (failure), so the reason for a failure could not be pinpointed from the return code. A customer needs programmatic access to IDS utilities in order to embed their product; knowledge of exit codes gives a client application error feedback. Where possible, return codes are standardized across utilities.
Sample shell script:
#!/bin/sh
# Execute the oninit program
oninit
# Get the return code from the oninit execution
RC=$?
# Validate the return code and take the necessary action
case $RC in
0) echo "RC=0: The database server was initialized successfully." ;;
1) echo "RC=1: Server initialization has failed." ;;
187) echo "RC=187: Check the entries in sqlhosts file." ;;
221) echo "RC=221: DUMPDIR missing. Creating DUMPDIR."
     mkdir $INFORMIXDIR/tmp
     chmod 770 $INFORMIXDIR/tmp ;;
*) echo "Return Code=$RC !" ;;
esac

62 Embeddability – Other Features
Automated DB Scheduler tasks added:
Automatic notification when IDS marks an index "bad"
Automatic table storage optimization based on user-settable parameters
Informix Embeddability Toolkit:
Tutorial for creating an end-to-end embeddability scenario
Example scripts for using the Deployment Assistant/Utility
Install and Deployment APIs: APIs to install and configure Informix from your application
The IBM Informix Embeddability Toolkit is a logical collection of the following components: ifxdeployassist (the deployment assistant, DA), ifxdeploy (the deployment utility, DU), ifxdeploy.conf (the DU's configuration file), and ifx_silent_deploy (an example script that automates silent deployment using the DU). The Linux shell script and the Windows batch script for this silent deployment example are posted separately on the Technote at
The tutorial covers two tasks for silent deployment of Informix: creating a snapshot for deployment (using the DA, archive an installed Informix server instance and its dbspaces on the template computer for future deployments) and silently deploying Informix from the snapshot (using the DU, its configuration file, and ifx_silent_deploy, silently deploy a copy of the archived instance and its dbspaces on a target computer)

63 Enhanced Security Management

64 Selective Row Level Auditing (SRLA)
onaudit: manages audit masks and configuration; you must be DBSSO or AAO
DBSSO can perform functions related to audit setup; AAO can perform functions related to audit analysis
Examples:
onaudit -l 1
onaudit -c
onaudit -a -u sqlqa -e +RDRW
onshowaudit: lets the AAO extract information from an audit trail
Example: onshowaudit -n <servernumber>
Informix auditing can be configured so that row-level events of only selected tables are recorded in the audit trail. Selective row-level auditing can compact audit records so that they are more manageable and can potentially improve database server performance. The onaudit utility supports an option (the -R flag) to enable selective row-level auditing, and the CREATE TABLE and ALTER TABLE statements flag specific tables for inclusion in the row-level audit event records. You can start selective row-level auditing either when you initially start auditing of your databases or while auditing is already running.
One reason to use selective row-level auditing is to filter out auditable events that are not important to database security. For example, an administrative user of an Informix installation with confidential data must be able to track when users perform actions that endanger the security of the system. With row-level auditing of all tables, the audit record contains information about events on system and reference tables as well as tables that contain sensitive confidential information, so investigating a security breach means wading through large amounts of irrelevant detail. By flagging only the security-critical tables for row-level auditing, the audit trail becomes a more compact set of records that is easier to analyze.

65 Selective Row Level Auditing (SRLA) – What’s New?
Previously, there was no way to enable auditing so that it excluded audit events on tables that you did not want to monitor with the onaudit utility; enabling row-level events could produce huge amounts of useless data
The database system security officer (DBSSO) can now configure auditing so that row-level events are recorded only for designated tables, rather than for ALL tables used by the database server
Selecting only the tables that you want to audit at the row level can improve database server performance, simplify audit trail records, and let you mine audit data more effectively
Current issues with masks: most of the time you do not need row-level audit information for all tables (some tables are just used for reference), and the information in row-level audit records contains the table_id and row_id, which can change over time, so looking back at old audit records can be meaningless

66 SRLA – Syntax New table level property added (AUDIT)
CREATE TABLE {existing syntax} [WITH AUDIT];
ALTER TABLE {existing syntax} [ADD AUDIT] | [DROP AUDIT];
ADTROWS: new parameter in the audit configuration file (adtcfg)
0 No change in existing row-level auditing behavior (default)
1 SRLA is enabled and only "audit"-enabled tables generate row-level audit records
Phase II to come: add the AUDIT property to a database so that all tables created in that database inherit it (only the DBSSO can change this setting, but users with RESOURCE privilege can create/drop tables); allow the DBSSO to specify table columns to include in the audit record, with some limits on types/sizes, to positively identify records where tabid/rowid can change
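A short sketch instantiating the syntax above (the table names are hypothetical):
CREATE TABLE payroll (emp_id INT, salary MONEY) WITH AUDIT;
ALTER TABLE customer ADD AUDIT;
ALTER TABLE customer DROP AUDIT;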

67 Trusted Context – Why use it?
Trusted Context is a feature first developed for DB2
Allows connection reuse under a different userid with authentication, to avoid the overhead of establishing a new connection
Allows connection reuse under a different userid without authentication
Accommodates application servers that need to connect on behalf of an end user but do not have access to that end user's password to establish a new connection on their behalf
Allows users to gain additional privileges when their connection satisfies certain conditions defined at the database server

68 Trusted Context – What is it?
A database object created by the database security administrator (DBSECADM)
Defines a set of properties for a connection that, when met, allow that connection to be a "trusted connection" with special properties:
The connection must be established by a specific user
The connection must come from a trusted client machine
The port over which the connection is made must have the required encryption
If the above criteria are met, the connection allows changes of userid and privileges as defined in the trusted context

69 Trusted Context – Steps
Step 1: Create trusted context objects
Created at the database level
Must be created by DBSECADM before trusted connections can be established
Can use OS users or mapped users
Step 2: Establish trusted connections
Must satisfy the criteria defined in the trusted context
Provision to switch user
Use transactions within the switched-user session

70 Trusted Context – Steps
CREATE TRUSTED CONTEXT CTX1
BASED UPON CONNECTION USING SYSTEM AUTHID BOB
DEFAULT ROLE MANAGER
ENABLE
ATTRIBUTES (ADDRESS ' ')
WITH USE FOR JOE, MARY WITHOUT AUTHENTICATION
Creates a trusted context object named CTX1
Allows connections only from the address listed in the ATTRIBUTES clause
Can switch to user Joe or Mary once a trusted connection is established

71 Trusted Context – Switching Users
Switch to any user defined in the scope of the trusted context object
Perform database operations; audit records will show the switched user as the originator of the operations
If using transactions, commit or roll back before switching to a new user

72 Informix Mapped Users
Previously, each user who needed to access the database server also needed an operating system account on the host computer
Can now configure Informix so that users no longer require operating system accounts to connect
Allows users authenticated by an external authentication service (such as Kerberos or Microsoft Active Directory) to connect to Informix
Works when a DBSA turns on the USERMAPPING parameter of the onconfig file and maps externally authenticated users to user properties in tables of the SYSUSER database
Onconfig variable: USERMAPPING OFF|ADMIN|BASIC
A user without a host operating system account should be able to connect to Informix
A DBSA should be able to grant Informix access to externally authenticated users by mapping them to the appropriate user and group privileges, regardless of whether these users have operating system accounts on the host computer
The USERMAPPING configuration parameter specifies whether such users can access the database server and whether any of them can have administrative privileges; you can still control which externally authenticated users are allowed to connect to Informix and their privileges
Simplified administration of users without operating system accounts (UNIX, Linux)

73 Informix Mapped Users – Example
grant access to bob properties user fred;
When 'bob' connects to Informix, as far as operating system access is concerned, Informix uses the UID, GID(s), and home directory of user 'fred' (which must be a user name known to the OS)
grant access to bob properties user fred, group (ifx_user), userauth (dbsa);
Similar to the previous entry: user 'bob' uses UID 3000 ('fred') and GIDs 3000 (users), 200 (staff), plus the extra group 1000 (ifx_user)
Additionally, assuming that USERMAPPING is set to ADMIN in the ONCONFIG file, 'bob' is treated as a DBSA

74 Agenda New Brand Name and Editions
Simplified Installation, Migration, and Portability
Flexible Grid with Improved Availability
Easy Embeddability
Expanded Warehouse Infrastructure
Empower Applications Development
Enhanced Security Management
Increased Performance
Other Features

75 What is a Flexible Grid?
A named set of interconnected servers for propagating commands from an authorized server to the rest of the servers in the set
Useful if you have multiple servers and you often need to perform the same tasks on every server
The following types of tasks are easily run through a grid:
Administering servers (for example, adding chunks or changing configuration parameter settings)
Updating the database schema and the data (for example, altering or adding tables, purging old data, or updating values based on conditions)
Running or creating stored procedures or UDRs
Managing and maintaining replication (for example, enabling replication when creating a table, and altering a replication definition when altering a replicated table)
Much like the power grid distributes electrical power throughout a region, the grid that you define distributes SQL commands to the replication servers in the grid

76 What are the features of the new Informix Flexible Grid?
Nodes in a grid do not have to be identical: different tables, different hardware, different OSs, different IDS versions
Simplified creation and maintenance of a global grid: create a grid, attach to a grid, detach from a grid, add/drop a node to/from a grid
DDL/DML operations on any node are propagated to all nodes in the grid
Management of the grid can be done from any node in the grid (for instance, add a dbspace to all nodes)
Tables no longer require primary keys: shadow column keys can be added automatically for tables without primary keys
Integration with OpenAdmin Tool (OAT)
Grid-based replication provides:
A means of replicating DDL across multiple nodes (CREATE TABLE, CREATE INDEX, CREATE PROCEDURE, …)
A means of replicating the execution of a statement rather than just the results of the execution
A means of supporting the Connection Manager on top of ER
The ability to automatically create ER replication as part of a CREATE TABLE, ALTER TABLE, or DROP TABLE DDL statement
The ability to replicate data using ER without a primary key
The ability to turn ER replication on/off within a transaction, not just at the start of the transaction

77 Define/Enable/Disable the Grid
The GRID is managed by using the cdr utility Define Defines the nodes within the grid cdr define grid <grid_name> --all cdr define grid <grid_name> <node1 node2 …> Enable Defines the nodes within the grid which can be used to perform a grid level operation Also used to determine which users are allowed to perform grid operations cdr enable grid --grid=<grid_name> --user=<user> --node=<node> Disable Used to remove a node or user from being able to perform grid operations cdr disable grid --grid=<grid_name> --node=<node_name> cdr disable grid --grid=<grid_name> --user=<user_name> cdr disable grid -g <grid_name> -n <node_name> -u <user_name> OAT support enabled All nodes within the grid must be running Panther (11.70) to be part of the grid. Do not use the --all parameter when running in a mixed environment Enable is also used to set the grid execution rules
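A concrete sequence, sketched with hypothetical grid, node and user names (grid1, g_node1, g_node2 and user informix are illustrative only):
cdr define grid grid1 g_node1 g_node2
cdr enable grid --grid=grid1 --user=informix --node=g_node1
After this, grid operations may be run only from g_node1 and only by user informix, per the enable rules above.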

78 Propagating database object changes
Can make changes to database objects while connected to the grid and propagate the changes to all the servers in the grid Can propagate creating, altering, and dropping database objects to servers in the grid The grid must exist and the grid routines must be executed as an authorized user from an authorized server To propagate database object changes: Connect to the grid by running the ifx_grid_connect() procedure Run one or more SQL DDL statements Disconnect from the grid by running the ifx_grid_disconnect() procedure In order to perform DDL operations at a grid level, you must first connect to the grid on a grid-enabled node as a grid-enabled user. This is done by executing the built-in procedure ifx_grid_connect(<gridName>, <autoRegister>, <tag>); where autoRegister is set to 1 if you want to register the DDL with ER and "tag" is an optional tag associated with any grid commands you perform. The tag can be used to make it easier to monitor the success/failure of grid operations. DDL operations will be performed on the target nodes within the grid: within the same database as on the source, by the same user as on the source, and using the same locale as on the source. The grid connection is terminated by executing the built-in procedure ifx_grid_disconnect(). For example, you can create a new database or table or change an existing database or table. You can also create stored procedures and user-defined routines. However, if you attempt to drop a table that is a participant in a replicate, you receive an error. Tip: Most DML statements should be run on a single replication server, instead of through a grid.

79 Example of DDL propagation
execute procedure ifx_grid_connect('grid1', 'tag1'); create database tstdb with log; create table tab1 ( col1 int primary key, col2 int, col3 char(20)) lock mode row; create index idx1 on tab1 (col2); create procedure loadtab1(maxnum int) define tnum int; for tnum = 1 to maxnum insert into tab1 values (tnum, tnum * tnum, 'mydata'); end for; end procedure; execute procedure ifx_grid_disconnect(); Will be executed on all nodes within the 'grid1' GRID

80 Monitoring a Grid
Example: cdr list grid grid1. NEW: Monitor a cluster with onstat -g cluster. Use the cdr list grid command to view information about servers in the grid and about the commands that were run on servers in the grid. If you run the command without any options or without a grid name, the output shows the list of grids. Servers in the grid on which users are authorized to run grid commands are marked with an asterisk (*). When you add a server to the grid, any commands that were previously run through the grid have a status of PENDING for that server. If you want to run previous grid commands on a new grid server, use the ifx_grid_redo() procedure; if you do not, you can purge the commands by running the ifx_grid_purge() procedure. When you run an SQL administration API command, the status of the grid command does not necessarily reflect whether the SQL administration API command succeeded; the grid command can have a status of ACK even if the SQL administration API command failed. The cdr list grid command shows the return codes of the SQL administration API commands. The task() function returns a message indicating whether the command succeeded; the admin() function returns an integer, and a positive number indicates that the command succeeded. Options (long form, short form, meaning): --acks (-a) Displays the servers in the grid and the commands that succeeded on one or more servers. --command= (-C) Displays the servers in the grid and the specified command. --nacks (-n) Displays the servers in the grid and the commands that failed on one or more servers. --pending (-p) Displays the servers in the grid and the commands that are in progress; a command can be pending because the transaction has not completed processing on the target server, the target server is down, or the target server was added to the grid after the command was run. --source= (-S) Displays the servers in the grid and the commands that were run from the specified server. --summary (-s) Displays the servers in the grid and the commands that were run on the grid. --verbose (-v) Displays the servers in the grid, the commands that were run on the grid, and the results of the commands on each server in the grid.

81 Informix Flexible Grid – Requirements
Enterprise Replication must be running Servers must be on Panther (11.70.xC1) Pre-panther servers within the ER domain cannot be part of the GRID Management of grid can be done by any node in the grid For instance add a dbspace to all nodes Tables no longer require primary keys Shadow column keys can be automatically added for tables without primary keys

82 Informix Flexible Grid Quickly CLONE a Primary server
Previously, to clone the Primary Create a level-0 backup Transfer the backup to the new system Restore the image Initialize the instance ifxclone utility Clones a primary server with minimal setup and configuration Starts the backup and restore processes simultaneously No need to read or write data to disk or tape Can create a standalone server or a remote standalone secondary server Add a server to a replication domain by cloning Requires the DIRECT_IO configuration parameter to be set to 0 on both the source and target servers Data is transferred from the source server to the target server over the network using encrypted SMX Connections Quickly clone a primary server You can now use the ifxclone utility to clone a primary server with minimal setup and configuration. Previously to clone a server it was necessary to create a level-0 backup, transfer the backup to the new system, restore the image, and initialize the instance. The ifxclone utility starts the backup and restore processes simultaneously and there is no need to read or write data to disk or tape. You can use the ifxclone utility to create a standalone server or a remote standalone secondary server. For example, you can quickly, easily, and securely clone a production system to a test system. The ifxclone utility requires the DIRECT_IO configuration parameter to be set to 0 on both the source and target servers. Cloning a primary server You can use the ifxclone utility to perform one-step server instantiation, allowing a primary server in a high-availability cluster to be cloned with minimum setup or configuration. You can use the ifxclone utility to create a stand-alone Informix® server or to create an RS secondary server. Using the ifxclone utility, the database administrator can quickly, easily, and securely create a clone server from a running Informix instance without needing to back up data on the source server, and transfer and restore it to the clone server. The backup and restore processes are started simultaneously using the ifxclone utility and there is no need to read or write data to disk or tape. Data is transferred from the source server to the target server over the network using encrypted Server Multiplexer Group (SMX) Connections. You can automate the creation of clone instances by calling the ifxclone utility from a script.
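A sketch of an ifxclone invocation; every server name, address and port below is a hypothetical placeholder, and the flag spellings are an assumption based on the documented short options, so verify them against the utility's usage text before relying on them:
ifxclone -T -S prod_srv -I 192.168.1.10 -P 9088 -t clone_srv -i 192.168.1.20 -p 9088 -d RSS
Here -S/-I/-P identify the running source server and -t/-i/-p the clone, -T uses trusted connections, and -d RSS asks for a remote standalone secondary rather than a standalone server.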

83 Informix Flexible Grid DDL on Secondary servers
Can now automate table management in high-availability clusters by running Data Definition Language (DDL) statements on all servers Can run most DDL statements such as CREATE, ALTER, and DROP on secondary servers In previous releases, only Data Manipulation Language (DML) statements could be run on secondary servers The dbschema and dbimport utilities are supported on secondary servers. The dbexport utility is supported only on RS secondary servers and only if both of the following conditions are true: The STOP_APPLY configuration parameter is set to a valid value other than zero, The USELASTCOMMITTED configuration parameter is set to COMMITTED READ, DIRTY READ, or ALL. Most applications that use DDL or DML can run on any of the secondary servers in a high-availability cluster; however, the following DDL statements are not supported: CREATE DATABASE (with no logging) CREATE EXTERNAL TABLE CREATE RAW TABLE CREATE TEMP TABLE (with logging) CREATE XADATASOURCE CREATE XADATASOURCE TYPE DROP XADATASOURCE DROP XADATASOURCE TYPE UPDATE STATISTICS

84 Replicate tables without primary keys
No longer require a Primary Keys for tables replicated by Enterprise Replication (ER) Use the WITH ERKEY keyword when defining tables Creates shadow columns (ifx_erkey_1, ifx_erkey_2, and ifx_erkey_3) Creates a new unique index and a unique constraint that ER uses for a primary key For most database operations, the ERKEY columns are hidden Not visible to statement like SELECT * FROM tablename; Seen in DB-Access - Table Column information Included in the number of columns (ncols) in the systables system catalog To view the contents of the ERKEY columns SELECT ifx_erkey_1, ifx_erkey_2, ifx_erkey_3 FROM customer; Example CREATE TABLE customer (id INT) WITH ERKEY; Using the WITH ERKEY Keywords Use the WITH ERKEY keywords to create the ERKEY shadow columns that Enterprise Replication uses for a primary key. The ERKEY shadow columns (ifx_erkey_1, ifx_erkey_2, and ifx_erkey_3) are visible shadow columns because they are indexed and can be viewed in system catalog tables. After you create the ERKEY shadow columns, a new unique index and a unique constraint is created on the table using these columns. Enterprise Replication uses that index instead of requiring a primary key. For most database operations, the ERKEY columns are hidden. For example, if you include the WITH ERKEY keywords when you create a table, the ERKEY columns have the following behavior: They are not returned by queries that specify an asterisk ( * ) as the projection list, as in the statement: SELECT * FROM tablename; They do appear in DB-Access when you ask for information about the columns of the table. They are included in the number of columns (ncols) in the systables system catalog table entry for tablename. To view the contents of the ERKEY columns, you must explicitly specify the columns in the projection list of a SELECT statement, as the following example shows: SELECT ifx_erkey_1, ifx_erkey_2, ifx_erkey_3 FROM customer; Example In the following example, the ERKEY shadow columns are added to the customer table: CREATE TABLE customer (id INT) WITH ERKEY;
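For a table that already exists, the same shadow columns can be added afterwards; the statement below is a sketch based on the ADD ERKEY clause of ALTER TABLE (the table name is the one from the slide):
ALTER TABLE customer ADD ERKEY;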

85 Transaction Survival during Cluster Failover
Can now configure servers in a high-availability cluster environment to continue processing transactions after failover of the primary server Transactions running on secondary servers are not affected Transactions running on the secondary server that becomes the primary server are not affected Transactions running on the failed primary server are terminated Benefits Reduce application development complexity Design applications to run on any node Reduce application maintenance Reduce the application downtime of cleanup and restarting the application after a failover Transaction completion during cluster failover Active transactions on secondary servers in a high-availability cluster now run to completion if the primary server encounters a problem and fails over to a secondary server. Previous versions of Informix rolled back all active transactions when a failover occurred. You can configure servers in a high-availability cluster environment to continue processing transactions after failover of the primary server. Transactions running on any server except the failed primary server continue to run. You configure the cluster environment so that: Transactions running on secondary servers are not affected. Transactions running on the secondary server that becomes the primary server are not affected. Transactions running on the failed primary server are terminated. Transaction completion after failover is currently not supported for smart large objects, XA transactions, and when running DDL statements on secondary servers. When a failover occurs, the secondary servers in the cluster temporarily suspend running user transactions until the new primary server is running. After failover, the secondary servers re-send the saved transactions to the new primary server. The new primary server resumes execution of the transactions from the surviving secondary servers. When running distributed transactions (transactions that span multiple database servers), any transaction running on the primary server at the time of failure are terminated. Whether failover is automatic (using the Connection Manager) or manual (by specifying a server to become the new primary server), the failover server must be the server with the most advanced log replay position of all the surviving servers in the cluster. If the failover server does not have the most advanced log replay position, all transactions in the cluster are terminated and rolled back.

86 Transaction Survival – Configuration
FAILOVER_TX_TIMEOUT Maximum number of seconds the server waits before rolling back transactions after failure of the primary server 0 Disables transaction survival (default value) > 0 Enables transaction survival; 60 seconds is a reasonable starting point On the failover node, maximum time to wait for secondary nodes to reconnect before rollback On a surviving secondary node, maximum time to wait before returning an error to the user (-1803/-7351) Configuring the server so that transactions complete after failover Use the FAILOVER_TX_TIMEOUT configuration parameter to configure the servers in a high-availability cluster so that transactions complete after failover. The value of FAILOVER_TX_TIMEOUT indicates the maximum number of seconds the server waits before rolling back transactions after failure of the primary server. Set FAILOVER_TX_TIMEOUT to the same value on all servers in the cluster. For example, to specify 20 seconds for transaction completion, set the value of the FAILOVER_TX_TIMEOUT configuration parameter in the onconfig file to 20. To disable transaction completion after failover, set the FAILOVER_TX_TIMEOUT configuration parameter to 0 on all servers in the cluster.
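A sketch of applying the setting across the cluster, assuming the parameter can be updated dynamically with onmode -wf (otherwise edit the onconfig file on each server and restart); the 60-second figure is just the illustrative value from the slide:
onmode -wf FAILOVER_TX_TIMEOUT=60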

87 Fragmentation

88 Two New Fragmentation Schemes
List Fragmentation Fragments data based on a list of discrete values Helps in logical segregation of data Useful when a table has a finite set of values for the fragment key and queries on the table have an equality predicate on the fragment key Interval Fragmentation Fragments data based on an interval (numeric or time) value Tables have an initial set of fragments defined by a range expression When a row is inserted that does not fit in the initial range fragments, Informix automatically creates a fragment to hold the row

89 List Fragmentation
List Fragmentation Fragments data based on a list of discrete values, e.g. states in the country or departments in an organization The table below is fragmented on column "state", also known as the fragment key or partitioning key (the slide calls out the Fragment Key and the List Values within the statement) CREATE TABLE customer (cust_id INTEGER, name VARCHAR(128), street VARCHAR(128), state CHAR(2), zipcode INTEGER, phone CHAR(12)) FRAGMENT BY LIST (state) PARTITION p0 VALUES ("WA","OR", "AZ") in rootdbs, PARTITION p1 VALUES ("CA") in rootdbs, PARTITION p2 VALUES (NULL) in rootdbs, PARTITION p4 REMAINDER in rootdbs;
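Adding another list fragment later does not require recreating the table; the statement below is a sketch of ALTER FRAGMENT ... ADD, where the partition name p5, the value "TX" and the dbspace are illustrative assumptions:
ALTER FRAGMENT ON TABLE customer ADD PARTITION p5 VALUES ("TX") IN rootdbs;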

90 Details of Interval Fragmentation
Fragments data based on an interval (numeric or time) value The table's initial set of fragments is defined by a range expression When a row is inserted that does not fit in the initial range fragments, Informix automatically creates a fragment to hold the row No exclusive access required for fragment addition No DBA intervention Purging a range can be done with a detach and drop No exclusive access is required If the dbspace selected for the interval fragment is full or down, Informix will skip that dbspace and select the next one in the list

91 Example of Interval Fragmentation with Integers
Fragment Key Interval Expression CREATE TABLE orders (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR) FRAGMENT BY RANGE (order_id) INTERVAL( ) STORE IN (dbs1, dbs2, dbs3) PARTITION p0 VALUES < in rootdbs; List of DBSpaces Initial Value
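The interval size and the initial boundary did not survive transcription; a plausible reconstruction of the statement, using 10000 for both values purely as an illustrative assumption:
CREATE TABLE orders (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR) FRAGMENT BY RANGE (order_id) INTERVAL(10000) STORE IN (dbs1, dbs2, dbs3) PARTITION p0 VALUES < 10000 in rootdbs;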

92 Example of Interval Fragmentation with Dates
Fragment Key Interval Expression CREATE TABLE orders (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR) FRAGMENT BY RANGE (order_date) INTERVAL( NUMTOYMINTERVAL(1,'MONTH')) STORE IN (dbs1, dbs2, dbs3) PARTITION p0 VALUES < DATE('01/01/2010') in rootdbs; List of DBSpaces Initial Value

93 Usage Example of Interval Fragmentation with Dates
CREATE TABLE orders (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR) FRAGMENT BY RANGE (order_date) INTERVAL( NUMTOYMINTERVAL(1,'MONTH')) STORE IN (dbs1, dbs2, dbs3) PARTITION p0 VALUES < DATE('01/01/2010') in rootdbs; What happens when you insert a row with order_date "07/07/2010"? A new fragment is automatically allocated for the month of July and will hold values 7/1/2010 through 7/31/2010 What happens when you insert a row with order_date "10/10/2009"? The value is inserted into the existing partition p0
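Purging an old range (slide 90) can then be done without exclusive access; the partition name and the target table below are hypothetical placeholders around the DETACH clause:
ALTER FRAGMENT ON TABLE orders DETACH PARTITION p0 old_orders;
DROP TABLE old_orders;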

94 New Fragmentation Schemes Supported by OAT’s Schema Manager

95 Update Statistics

96 Update Statistics Improvements For Fragmented Tables
New finer granularity of statistics for fragmented tables Statistics are calculated at the individual fragment level Controlled by new table property STATLEVEL UPDATE STATISTICS no longer has to scan the entire table after an ATTACH/DETACH of a fragment Using the new: UPDATE STATISTICS FOR TABLE ... [AUTO | FORCE] For extremely large tables, substantial Informix resources can be conserved by updating only the subset of fragments with stale statistics Can also specify the criteria by which stale statistics are defined Using the new STATCHANGE property

97 Update Statistics Improvements
Each table/fragment tracks the number of updates, deletes and inserts New table properties statchange and statlevel statchange Change percentage of a table/fragment before statistics or distributions will be updated statlevel Specifies the granularity of distributions and statistics TABLE, FRAGMENT, AUTO Fragment level statistics and distributions Stored at the fragment level Only fragments which exceed the statchange level are re-evaluated Detaching or attaching a fragment can adjust the table statistics without having to re-evaluate the entire table

98 Improving Update Statistics
Fragment Level Statistics When attaching a new fragment only the new fragment needs to be scanned, not the entire table Only fragments which have expired statistics are scanned Defining statistics expiration policies at the table level Detailed tracking of modifications to each table and fragment Automatically skipping tables or fragments whose statistics are not expired ANSI databases implicitly commit after each index/column statistics update

99 Table Level Optimizer Statistics Policies
CREATE TABLE …. STATLEVEL [ TABLE | FRAGMENT | AUTO ] STATCHANGE STATLEVEL Defines the granularity or level of statistics created for a table STATCHANGE Percentage of the table modified before table/fragment statistics are considered expired
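A sketch of the clause in context, following the property order shown above; the column list and the 20 percent threshold are illustrative assumptions:
CREATE TABLE orders (order_id INTEGER, order_date DATE) STATLEVEL AUTO STATCHANGE 20;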

100 Fragment Level Statistics – STATLEVEL clause
Defines the granularity of statistics created for a table TABLE Entire table is read and table level statistics stored in sysdistrib catalog FRAGMENT Each fragment has its own statistics which are stored in the new sysfragdist catalog AUTO (default option for all tables) System automatically determines the STATLEVEL FRAGMENT is chosen if: Table is fragmented by EXPRESSION, INTERVAL or LIST and Table has more than a million rows Otherwise, mapped to TABLE

101 Fragment Level Statistics – STATCHANGE property
Threshold applied at a table or fragment level to determine if existing statistics are considered expired Valid values for STATCHANGE are integers between 0 and 100 Can be set for: Entire server using new ONCONFIG parameter STATCHANGE (default 10%) Session level using SET ENVIRONMENT STATCHANGE value Table level by specifying STATCHANGE property in CREATE or ALTER TABLE statement Order of precedence for STATCHANGE Table Property Session setting ONCONFIG setting Default value (10%)

102 New Syntax for Update Statistics
UPDATE STATISTICS FOR TABLE …. [ AUTO | FORCE ] AUTO Only tables or fragments having expired statistics are re-calculated FORCE All indexes and columns listed in the command will have their statistics re-calculated Default behavior is set by the AUTO_STAT_MODE parameter Enabled by default (i.e. AUTO)
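Two sketched invocations that combine the new keywords with the existing resolution modes (table and column names are placeholders):
UPDATE STATISTICS MEDIUM FOR TABLE orders AUTO;
UPDATE STATISTICS HIGH FOR TABLE orders (order_date) FORCE;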

103 Data Warehouse

104 Multi Index Scan
Multi Index Scan Utilize multiple indexes in accessing a table Example: the following indexes exist on a table Index idx1 on tab1(c1) Index idx2 on tab1(c2) The server does the following: Index scans on idx1 and idx2 Combines the results Looks up rows satisfying both index scans New "Skip Scan" is used Looks like a sequential scan, but only reads the required rows SELECT * FROM tab1 WHERE c1 = 27 AND c2 BETWEEN 77 AND 88
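One way to observe the new access method; the CREATE INDEX statements mirror the slide, and reading the query plan from the sqexplain.out file produced by SET EXPLAIN is the usual check (exact plan wording varies, so treat the expectation of a skip-scan entry as an assumption):
CREATE INDEX idx1 ON tab1(c1);
CREATE INDEX idx2 ON tab1(c2);
SET EXPLAIN ON;
SELECT * FROM tab1 WHERE c1 = 27 AND c2 BETWEEN 77 AND 88;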

105 OAT’s View of Multi Index Scan

106 Star Join for Snowflake Queries
Star join is a new query processing method Improves query performance for star-schema queries Requires multi-index scan and skip scan Snowflake schema is an extension of star schema with multiple levels of dimension tables Uses bitmap technology internally for efficient removal of unnecessary data rows Uses pushdown technology

107 What is a Push Down Hash Join – How it Works!
Discard Fact table rows early Reduce the number of Fact table rows accessed based on predicates on dimension tables Take advantage of multiple foreign key indexes on the Fact table (relies on multi-index scan) Use hashing when an index is absent Uses the alternate parent mechanism Requires PDQ (exchange iterator) (Slide diagram: an iterator tree with the push-down hash join (PDHJ) feeding the rest of the iterator tree, above an exchange (XCHG) iterator over the dimension table scan and the fact table scan.)
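Because the push-down hash join requires PDQ, a session would typically enable it before running the star-schema query; the fact and dimension table names below are hypothetical:
SET PDQPRIORITY 50;
SELECT g.region, SUM(f.amount) FROM fact_sales f, dim_geography g WHERE f.geo_id = g.geo_id AND g.country = 'US' GROUP BY g.region;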

108 Snowflake and Star Joins

109 Agenda New Brand Name and Editions
Simplified Installation, Migration, and Portability Flexible Grid with Improved Availability Easy Embeddability Expanded Warehouse Infrastructure Empower Applications Development Enhanced Security Management Increased Performance Other Features 109 109

110 What are Light Scans (Recap)?
A sequential scan of large tables which read pages in parallel from disk and store in private buffers in Virtual memory Advantages of light scans for sequential scans: Bypass the overhead of the buffer pool when many pages are read Prevent frequently accessed pages from being forced out of the buffer pool Transfer larger blocks of data in one I/O operation (64K/128K platform dependent) Conditions to invoke light scans: The optimizer chooses a sequential scan of the table The number of table pages > number of buffers in the buffer pool The isolation level obtains no lock or a shared lock on the table RTO_SERVER_RESTART automatically enables light scans Monitor using onstat -g scn Prevents buffer contention for the scanning of the table Reduces buffer waits The isolation level obtains no lock or a shared lock on the table: Dirty Read (including nonlogging databases) isolation level Repeatable Read isolation level if the table has a shared or exclusive lock Committed Read isolation if the table has a shared lock The database server does not perform light scans on system catalog tables. Light scans do not occur under Cursor Stability isolation. Light Scan - The light scan is a mechanism that bypasses the traditional reading process. Pages are read from disk and put in the buffer cache in the resident shared memory segment. The light scan reads the page from disk and puts it in the light_scan_buffers, which reside in the virtual segment of the shared memory. It then reads the data in parallel, providing a significant increase in performance when compared with scanning large tables. The number of light scan buffers is defined by the following equation: light_scan_buffers = roundup((RA_PAGES + RA_THRESHOLD)/ (MAXAIOSIZE/PAGESIZE)) Note: MAXAIOSIZE is an Informix internal parameter, and it is platform dependent. In general, it is in the area of about 8 pages. As you can see the RA_PAGES and RA_THRESHOLD can impact the number of light scan buffers, and they cannot be changed dynamically. What you can consider is to create dbspaces dedicated to the DSS activity, giving them a larger pagesize. When increasing the PAGESIZE, IDS will increase the number of light scan buffers, see the formula above. The page size must be a multiple of the operating system page size, but not greater than 16 kilobytes (KB). Put attention on the size of your row, each page can contain maximum 255 rows so if the row size is small and the page size is large you can risk to lose disk space. To know the maximum row size use the following command: onstat -g scn command: Print scan information Use the onstat -g scn command to display the status of a current scan and information on the scan. The onstat -g scn command also identifies whether a scan is a light or bufferpool scan. If you have a long-running scan, you might want to use this command to check the progress of the scan, to determine how long the scan will take before it completes, and to see whether the scan is a light scan or a bufferpool scan. Scan Type Either: Bufferpool Light (light scan) . Lock Mode The type of acquired lock or no lock: Table (table-level lock acquired) Slock (share locks acquired) Ulock (update locks acquired) blank (no locks acquired) This column can also show one of the following values: +Test (The scan tested for a conflict with the specified lock type; the lock was not acquired.) +Keep (The acquired locks will be held until end of session instead of the end of the transaction.) 
Notes This column can show one of the following values: Look aside The light scan is performing look aside. The light scan reads blocks of pages directly from disk into large buffers, rather than getting each page from the buffer manager. In some cases, this process requires the light scan to check the buffer pool for the presence of each data page that it processes from one of its large buffers; this process is called look aside. If the page is currently in the buffer pool, the light scan will use that copy instead of the one in the light scan large buffer. If the page is not in the buffer pool, the light scan will use the copy that the light scan read from disk into its large buffer. If the light scan is performing look aside, the performance of the scan is slightly reduced. In many cases, the light scan can detect that it is impossible for the buffer pool to have a newer version of the page. In these situations, the light scan will not check the buffer pool, and the look aside note will be absent. Forward row lookup The server is performing a light scan on a table that has rows that span pages. The light scan must access and use the buffer pool to get the remainder pieces of any rows that are not completely on the home page.

111 Light Scan Support for All Data Types (11.50.xC6)
Can now enable Informix to perform light scans on: VARCHAR, LVARCHAR, NVARCHAR Compressed tables Any table with rows larger than a page Tables now only have to be greater than 1 MB in size Versus greater than the size of the BUFFERPOOL Light Scan for fixed length rows already enabled Enable: Environment: export IFX_BATCHEDREAD_TABLE=1 ONCONFIG file: BATCHEDREAD_TABLE 1 Session: SET ENVIRONMENT IFX_BATCHEDREAD_TABLE ‘1’; The optimizer chooses a sequential scan AND The number of pages in the table is greater than the number of Buffers in the BUFFERPOOL The isolation level of the session obtains no locks, or shared locks The table contains no varchar, nvarchar or lvarchar data. The table is not compressed The individual row size is less than the page size SO ONLY A LIMITED NUMBER OF TABLES COULD USE THIS The server does not scan pieces of a row (such as smart large objects) that are stored outside of the row In a Light Scan, the data read from the table is stored in its own private buffer pool, with the understanding that there will be no need to retain the data for further operations. There is an inherent implication that the entire table, or at least an entire fragment of the table, will be read. With a Light Scan, the engine doesn't flood the buffer pool with the contents of a large table and avoids the overhead of buffer management. A Light Scan can be significantly faster than a traditional scan. Skips using the bufferpool for sequential scans Does I/O in large blocks - more efficient Does not go through the buffer pool so a light scan will not kick out any pages already there Code path optimizations: Internally pass rows in batches instead of individually Some filtering pushed down closer to the data Streamlined code With Read Ahead turned on, then performance improvement would be in the 10% range Without Read ahead turned on, i.e. in an OLTP environment, then it could easily be 50% improvement. Onconfig param is set for Read Ahead. Mixed workload enhancements coming in Panther. Check using onstat –g lsc Performance advantages of using light scans Transfers larger blocks of data in one I/O operation Bypasses the overhead of the buffer pool when many pages are read Prevents frequently accessed pages from being forced out of the buffer pool when many sequential pages are read for a single DSS query Light scans occur under the following conditions: The optimizer chooses a sequential scan of the table. The number of pages in the table is greater than the number of buffers in the buffer pool. The isolation level obtains no lock or a shared lock on the table: descriptor (decimal) Light scan ID address (hex) Memory address of the light scan descriptor next_lpage (hex) Next logical page address to scan next_ppage (hex) Next physical page address to scan ppage_left (decimal) Number of physical pages left to scan in the current extent #bufcnt (decimal) Number of light scan buffers used for this light scan #look_aside (char) Whether look aside is needed for this light scan (Y = yes, N = no). Look asides occur when a thread needs to examine the buffer pool for existing pages to obtain the latest image of a page being light scanned. These larger I/O blocks are usually 64 kilobytes or 128 kilobytes. To determine the I/O block size that your platform supports, see your machine notes file. Light scans do not occur under Cursor Stability isolation. When the database server is using the RTO_SERVER_RESTART configuration parameter, light scans automatically set a flag for the next checkpoint to activate. 
Dirty Read (including nonlogging databases) isolation level Repeatable Read isolation level if the table has a shared or exclusive lock Committed Read isolation if the table has a shared lock The ppage_left column will display the number of pages still to scan for a given fragment of a table

112 Light Scan Support for All Data Types (11.70.xC1)
Automatic light scans on tables Informix now automatically performs light scans when appropriate No longer have to set configuration parameters to enable Informix to perform these scans New BATCHEDREAD_INDEX configuration parameter Enables the optimizer to automatically fetch a set of keys from an index buffer, which reduces the number of times that a buffer is read, thus improving performance Parameter details: units Integer; default value 1; range of values 0 or 1; takes effect when the database server is stopped and restarted To disable these light scans, set the values of the BATCHEDREAD_TABLE and BATCHEDREAD_INDEX configuration parameters to 0.
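A sketch of the corresponding onconfig lines, both left at their default value of 1; setting either to 0 disables the behavior as noted above:
BATCHEDREAD_TABLE 1
BATCHEDREAD_INDEX 1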

113 Light Scan Support for All Data Types (11.70.xC1)
onstat -g lsc Displays information based on pages scanned of large data tables, when the BATCHEDREAD_TABLE configuration parameter or the IFX_BATCHEDREAD_TABLE environment option is not enabled Note: this command is deprecated onstat -g scn Displays the status of all light scans starting in FC6 The slide shows sample "RSAM batch sequential scan info" output (columns: SesID, Thread, Partnum, Rowid, Rows Scan'd, Scan Type, Lock Mode, Notes) from a parallel query operating across a table with 4 fragments, showing that you can monitor the progress of each individual scan thread; each of the four scan threads reported Scan Type Buffpool and Lock Mode Slock+Test

114 Other Performance Enhancements
Automated DB Scheduler tasks added to help with performance Timeout users that have been idle for too long Automatically allocate CPU VPs to match hardware/licensing when IDS starts Alerts for tables that have outstanding in-place alter operations Ability to configure the automatic compressing, shrinking, repacking, and defragmenting of tables and extents Large Page Support on Linux Previously, only AIX and Solaris systems were supported The use of large pages can provide performance benefits in large memory configurations

115 Agenda New Brand Name and Editions
Simplified Installation, Migration, and Portability Flexible Grid with Improved Availability Easy Embeddability Expanded Warehouse Infrastructure Empower Applications Development Enhanced Security Management Increased Performance Other Features 115 115

116 Prevent accidental disk initialization of an instance
FULL_DISK_INIT configuration parameter Specifies whether or not the disk initialization command (oninit -i) can be executed on an Informix instance when a page zero exists at the root path location Prevents accidental initialization of an instance, or of another instance, when the first page of the first chunk (page zero) exists at the root path location Page zero, which is created when Informix is initialized, is the system page that contains general information about the server Values: 0 The oninit -i command runs only if there is not a page zero at the root path location 1 The oninit -i command runs under all circumstances; the server also resets the FULL_DISK_INIT configuration parameter back to 0 after the disk initialization This prevents the major problems that can occur if you or someone else accidentally initializes your instance, or another instance, when page zero exists at the root path location.
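A sketch of deliberately lifting the guard before an intentional re-initialization, assuming FULL_DISK_INIT can be changed dynamically with onmode -wf (if it cannot, edit the onconfig file instead):
onmode -wf FULL_DISK_INIT=1
oninit -i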

117 Tool for collecting data for specific problems
New ifxcollect tool to collect diagnostic data if necessary for troubleshooting a specific problem, such as an assertion failure Can also specify options for transmitting the collected data via the File Transfer Protocol (FTP) Located in the $INFORMIXDIR/bin directory Output files located in the $INFORMIXDIR/isa/data directory Examples To collect information for a general assertion failure: ifxcollect -c af -s general To collect information for a performance problem related to CPU utilization: ifxcollect -c performance -s cpu To include FTP information, specify the additional options -f -e -p -f -m machine -l /tmp -u user_name -w password

118 Backup to Cloud – Overview
Support for backup of Informix data to Amazon Simple Storage Service (S3) cloud storage system and restore from it by using ontape backup and restore utility Benefits Simplifies the process of Informix data backup to an off-site S3 storage location, which can be accessed from anywhere on the web Scalable storage capacity to match the growth in Informix backup data (within backup object size limit imposed by S3) Reliable storage system through SLA provided by S3 Pay-as-you-go model can provide cost-effective Informix backup solution

119 Backup to Cloud – How Backup works?
Backup: ontape backs up the data to a file in a local directory; ontape starts the Cloud Client and waits for it to finish; the Cloud Client transfers the backup file from the local directory to S3; the Cloud Client returns its execution status back to ontape, for ontape to finish running. Restore: the Cloud Client retrieves the backup file from S3 into the local directory; ontape restores the server from the local file.

120 Websphere MQ

121 Order Entry Application
(Slide diagram: an Order Entry application and a Shipping application exchange messages with Informix Dynamic Server through WebSphere MQ queues such as recvdnotify, shippingreq and creditque, alongside inventory, credit processing and transaction management components.) MQ Functions Functions to: Send Receive Publish Subscribe Abstract Use a virtual table to map a queue to a table Send and receive via INSERT and SELECT Send strings, documents (CLOB/BLOB) Use SQL callable routines to exchange data with MQ begin work; INSERT into tasktab values (MQREAD("Myservice")); SELECT MQSEND("UService", order || ":" || address) FROM tab where cno = 12345; COMMIT WORK; -- or ROLLBACK WORK Transactional protection Prior to IDS Panther (11.50 and earlier): Order Entry Application Simplified Interface SQL based program SQL based MQ access 2-phase commit

122 Order Entry Application
(Slide diagram: Order Entry, Shipping, Inventory and Credit processing applications running against Oracle, DB2 and IDS servers, each reaching the shippingreq queue through WebSphere MQ, with two IDS instances participating.) Support distributed topology for IDS and Websphere MQ Server based MQ messaging Client based MQ messaging Support multiple Queue Managers within a single transaction Query attributes of the queue manager or queue MQInquire() Query if a message is available MQHasMessage() Support FIRST and SKIP options for SELECT on MQ tables Provide write-only MQ tables MQCreateVtiReceive() MQ messaging enhancements Applications can send and receive messages from local or remote queue managers that reside anywhere in the network and participate in a transaction. There is no limit to the number of queue managers that can participate in a transaction. MQ messaging includes these new functions: MQHasMessage(): Checks if there is a message in the queue MQInquire(): Queries for attributes of the queue MQCreateVtiWrite(): Creates a table and maps it to a queue that is managed by WebSphere MQ These enhancements simplify administrative tasks and reduce the number of WebSphere MQ licenses that are needed. As of this release, MQ messaging is supported on Linux 64 bit operating systems that run on zSeries hardware platforms.
