
1 IDS 12 - Replication Enhancements Scott Pickett & Carlton Doe WW & NA Informix Technical Sales For questions about this presentation contact: spickett@us.ibm.com © 2014 IBM Corporation

2 Agenda  Connection Manager Network Monitoring and Failover  Shared Disk Alternative Communications  Enterprise Replication –Queue Monitoring –Storage Provisioning –Send-only Replicates  Flexible Grid –Object Replication –Grid Queries –Deferred DDL Propagation © 2014 IBM Corporation2

3 Connection Manager Network Monitoring & Failover © 2014 IBM Corporation

4 Network Based Failover  Consider the following condition –App servers App_1 and App_2 are hosting applications that require a connection to the cluster primary because they are using the repeatable read isolation level [Diagram: Primary, Shared Disk, App_1, App_2] © 2014 IBM Corporation4

5 Network Based Failover  Consider the following condition (cont.) –App server App_1 loses its network connection to the cluster primary. Under the previous Connection Manager rules, no cluster failover would occur since both instances are operational. However, App_1 is now blocked because it cannot use the SD secondary [Diagram: Primary, Shared Disk, App_1, App_2] © 2014 IBM Corporation5

6 Network Based Failover  Now, Connection Managers can execute a cluster failover if a high-priority app server is unable to connect to the cluster primary  Requirements –Each app server must be running on its own physical server or LPAR –Each app server must be running its own Connection Manager agent –The SLA must be configured to use PRIMARY only in its definition This will be the case with repeatable read isolation level applications –Two new keywords need to be added to the Connection Manager configuration file LOCAL_IP – the list of local network addresses on the server that the Connection Manager agent should monitor for potential failures PRIORITY – the relative importance of that app server vis-à-vis the other app servers using the cluster  This determines which app server's CM agent can initiate a failover relative to the other app servers © 2014 IBM Corporation6

7 Network Based Failover  What happens –When App_1 loses its connection to the primary, because its CM has a higher priority than App_2's (Priority 1 vs. Priority 2), the App_1 CM will invoke a cluster primary failover to the other instance [Diagram: Primary, App_1 (Priority 1), App_2 (Priority 2)] © 2014 IBM Corporation7

8 Network Based Failover  However –If the PRIORITY values are reversed and App_1 loses its connection, because App_1 now has the lower priority (Priority 2 vs. Priority 1 for App_2), no failover will occur and services from App_1 will hang [Diagram: Primary, Shared Disk, App_1 (Priority 2), App_2 (Priority 1)] © 2014 IBM Corporation8

9 Network Based Failover  The new configuration parameters are set as follows –LOCAL_IP – in the CM header –PRIORITY – in the FOC section

NAME my_test_cm
MACRO my_macro (inst_1, inst_12, inst_33)
LOCAL_IP 192.168.199.123, 192.168.199.124
LOGFILE /opt/IBM/informix/logs/mycm_log
……
CLUSTER test_cluster # cluster name
{
   INFORMIXSERVER inst_1
   SLA oltp DBSERVERS=primary
   SLA payroll DBSERVERS=HDR,primary
   SLA report DBSERVERS=SDS,HDR
   …………
   FOC ORDER=inst_2,inst_4 PRIORITY=2 TIMEOUT=10 RETRY=1
}

© 2014 IBM Corporation9

10 Questions © 2014 IBM Corporation10

11 Shared Disk © 2014 IBM Corporation

12 Shared Disk  Consider this condition –The network connection between the cluster primary and one or more SD secondary instances is lost, Shared Disk 2 for example [Diagram: Primary; Shared Disk 1, Shared Disk 2, Shared Disk 3] © 2014 IBM Corporation12

13 Shared Disk  Historically, this would prevent Shared Disk 2 from participating in database operations since it couldn’t synchronize checkpoints and exchange other information with the cluster primary  Informix 12.10 introduces an alternate communication mechanism for SD secondary instances in the event of a network connection failure to the primary –A drop-box approach – messages are written to a common smart BLOBspace that the primary and SD secondary instances can read © 2014 IBM Corporation13

14 Shared Disk  Controlled by the SDS_ALTERNATIVE parameter –Must be set to a single smart BLOBspace –If it is set, the cluster will use it in the event of a network failure If not set, instance behavior will not change  The BLOBspace will be used to communicate a wide range of messages, including a shutdown to the primary if a failover is being initiated –For example, from an app-server-based CM with a high PRIORITY value which has also lost connectivity to the primary © 2014 IBM Corporation14
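A minimal sketch of enabling the drop-box, assuming a hypothetical smart BLOBspace named sds_altspace on shared storage; the SDS_ALTERNATIVE parameter name comes from the slide above, while the space name and path are illustrative:

onspaces -c -S sds_altspace -p /shared/ifx/sds_altspace -o 0 -s 102400   # sbspace all cluster nodes can reach
SDS_ALTERNATIVE sds_altspace   # set in $ONCONFIG on the primary and each SD secondary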

15 Questions © 2014 IBM Corporation15

16 Enterprise Replication – Check Queues © 2014 IBM Corporation

17 ER Check Queues  One of the great things about ER and Flexible Grid is their asynchronous nature –No impact on application throughput waiting for responses from other instances –Each instance is completely independent and able to have any combination of replication rules and replicated data  One of the biggest administrative issues with ER and Flexible Grid is also their asynchronous nature –While data delivery is guaranteed, there is no absolute time limit on when it will occur –Because each instance is completely independent, it is possible for different instances to have applied data and administrative changes to different points © 2014 IBM Corporation17

18 ER Check Queues  Historically, the only way to monitor data changes in the cluster was to execute a series of queries to check the number of rows and/or their values  This could be problematic, especially if an administrator: –Needs to define and start a replicate definition on a table, then immediately perform an initial synchronization The sync task cannot be started until the replicate definition is propagated to all of the ER servers  The solution was to manually check each instance's replicate definitions and their status –Needs to confirm that all committed user transactions have been replicated to all ER target servers before starting a reporting application on one or more ER servers The solution was to manually query each server for the number of rows and their values © 2014 IBM Corporation18

19 ER Check Queues  Administrators can now use the cdr check queue command to determine if operations are still queued and waiting for execution on one or more nodes –Supports reporting on the control, send and receive queues  When executed against the control and receive queues –The operation waits to return until all operations in the queue at the time the operation was executed have been processed In other words, applied on the instance  When executed against the send queue –The operation waits for all active transactions at the time the operation was executed to finish their work, populate the send queue, get pushed to targets then deleted from the send queue –In other words, the in-flight transactions have been sent to the targets for application there © 2014 IBM Corporation19

20 ER Check Queues  Depending on the depth of the monitored queue(s), waiting for the command to report that the queue(s) are clear can take some time –A "wait" flag is available: set it to a time interval in seconds, minutes, or hours, after which the command returns with a list of cleared and uncleared queue(s) If the queues clear before that time, the command returns immediately –All instances at the same level can be monitored in parallel if desired  All root instances or all non-root instances Instances at different levels cannot be monitored as a set  For example, a root and a non-root instance –When checking a leaf instance, the command will execute against the leaf instance's parent instance © 2014 IBM Corporation20

21 ER Check Queues  Syntax for the command –wait values  The default time unit is minutes 0 – return a result immediately, do not wait for queues to clear -1 – wait until all queue(s) are cleared before returning N – positive integer value of time in seconds, minutes, or hours to wait before returning a result  One or more queues could still be full at this time © 2014 IBM Corporation21

22 ER Check Queues  Syntax for the command (cont.) –With the connect and target options, the command can be executed against all instances or a specifically named sub-set of the ER / Grid cluster © 2014 IBM Corporation22
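A sketch of the general form, pieced together from the examples on the next slides; the cntrlq spelling for the control queue and the placement of the connect/target options are assumptions, while the -q, -w, and -a flags and the trailing server group appear verbatim in the examples:

cdr check queue -q {cntrlq | sendq | recvq} -w {0 | -1 | N[s|m|h]} [-a | server_group ...]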

23 ER Check Queues  Syntax for the command (cont.) –A successful execution will return a 0 (zero) code –A failed execution (one or more queues are still full) can return one of the following codes: 5, 17, 21, 48, 62, 94, 99, 100, 196, 222 © 2014 IBM Corporation23

24 ER Check Queues  Examples –Check the send queue on inst_1, wait up to 10 minutes to complete The queue cleared in that time frame and the command returned successfully

cdr check queue -q sendq -w 10m g_inst_1
Checking sendq queue status for server g_inst_1...
sendq queue status for g_inst_1 as of Wed Jan 9 13:03:19 2013: COMPLETE

© 2014 IBM Corporation24

25 ER Check Queues  Examples (cont.) –Check the receive queue on all instances, wait up to 5 seconds to complete The queues did NOT clear in that time frame and the command returned an error

cdr check queue -q recvq -w 5s -a
Checking recvq queue status for server g_inst_1...
recvq queue status for g_inst_1 as of Wed Jan 9 13:08:19 2013: COMPLETE
Checking recvq queue status for server g_inst_2...
recvq queue status for g_inst_2 as of Wed Jan 9 13:08:19 2013: COMPLETE
Checking recvq queue status for server g_inst_3...
recvq queue status for g_inst_3 as of Wed Jan 9 13:08:19 2013: INCOMPLETE Operation timed out.
command failed -- Command timed out. (21)

© 2014 IBM Corporation25

26 Questions © 2014 IBM Corporation26

27 Enterprise Replication and Storage Provisioning © 2014 IBM Corporation

28 ER and Automatic Storage Provisioning  ER and Flexible Grid have a number of prerequisites that must be met before an ER / Grid cluster is instantiated –Includes Changes to $SQLHOSTS  To support communication to server groups rather than individual instances (see the sketch below) Parameters set in $ONCONFIG, though not all have to be configured  Includes CDR_EVALTHREADS, CDR_DSLOCKWAIT, CDR_QUEUEMEM, CDR_NIFCOMPRESS, CDR_SERIAL, CDR_DBSPACE, CDR_QHDR_DBSPACE, CDR_QDATA_SBSPACE, CDR_SUPPRESS_ATSRISWARN, CDR_DELAY_PURGE_DTC, CDR_LOG_LAG_ACTION, CDR_LOG_STAGING_MAXSIZE, CDR_MAX_DYNAMIC_LOGS, GRIDCOPY_DIR, CDR_TSINSTANCEID, CDR_MAX_FLUSH_SIZE Creation of at least one dbspace and one smart BLOBspace for queues and other ER / Grid storage requirements  Configured via the CDR_QDATA_SBSPACE and CDR_DBSPACE parameters  Can also create a separate space for CDR_QHDR_DBSPACE if you don't want to use the rootdbs © 2014 IBM Corporation28
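A sketch of the $SQLHOSTS server-group convention; the group, server, host, and port values are hypothetical (the group line carries an i= identifier, and each member points back to its group with the g= option):

g_inst_1   group     -       -      i=1
inst_1     onsoctcp  host_a  9088   g=g_inst_1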

29 ER and Automatic Storage Provisioning  Failure to create the spaces and configure the $ONCONFIG is one of the biggest stumbling blocks for new ER/Grid administrators  In Informix 12.10, if the spaces have not been created but there is sufficient space in the instance’s storage pool, the cdr define server command will automatically –Use space from the pool to create the required spaces –Dynamically configure the CDR_QDATA_SBSPACE and CDR_DBSPACE parameters  Obviously, this requires the configuration and population of an instance storage pool –Informix 11.7 technology –Minimum space requirements for the spaces 500 MB for CDR_QDATA_SBSPACE 100 MB for CDR_DBSPACE © 2014 IBM Corporation29
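A sketch of populating the storage pool ahead of cdr define server, using the SQL administration API introduced in 11.7; the path and sizes are hypothetical:

database sysadmin;
-- arguments: path, begin offset (KB), total size (KB), chunk size (KB), priority
execute function task('storagepool add', '/ifx/pool_file', '0', '2000000', '100000', '2');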

30 ER and Automatic Storage Provisioning  Other $ONCONFIG parameter settings will have to be set via the cdr change onconfig command as needed –For example Inst_5: cdr change onconfig “CDR_SERIAL 2,6” © 2014 IBM Corporation30

31 Questions © 2014 IBM Corporation31

32 Enterprise Replication – Send Only Replicates © 2014 IBM Corporation

33 ER Send-Only Replicates  Consider these two replication use cases –Price change distribution throughout the organization: a master price list flows from corporate headquarters out to each distribution center / store and branch office –Daily / weekly activity consolidation: each location's activity flows back up to corporate headquarters [Diagram: fan-out of the master price list and fan-in of activity between corporate headquarters, distribution centers / stores, and branch offices] © 2014 IBM Corporation33

34 ER Send-Only Replicates  In earlier versions of Informix, there were two options for creating replicates to support these business use cases –Create a separate replicate for each pair of instances A lot of work, and easy to get out of sync as the cluster grows or is changed –Create a single, master replicate with participant rules Rules that specify which instance is fully updatable and which is receive-only Only works for data distribution, not consolidation For consolidation, three replicate definitions are required, one for each instance In this case g_er1 has a P participant qualifier indicating the replicate is R / W on this instance g_er2 and g_er3 have an R participant qualifier indicating the replicate is receive-only on these instances

cdr define repl --master=g_er1 -u -c g_er1 -C always -f y price_book \
  "P stores@g_er1:informix.my_prices" "select * from my_prices where prod_code > 500" \
  "R stores@g_er2:informix.my_prices" "select * from my_prices" \
  "R stores@g_er3:informix.my_prices" "select * from my_prices"

© 2014 IBM Corporation34

35 ER Send-Only Replicates  But what if you want instances to send / aggregate data changes but not receive updates? –The P option allows both sending and receiving  Informix 12.10 introduces the S (send-only) participant type –Can be used in the cdr define replicate command In this case  g_er1 has an S participant qualifier indicating the replicate can only send data changes from this instance; it cannot receive updates  g_er2 and g_er3 don't have a participant qualifier, indicating the replicate is R/W on these instances

cdr define repl -c g_er1 -C always -f y price_book \
  "S stores@g_er1:informix.my_prices" "select * from my_prices where prod_code > 500" \
  "stores@g_er2:informix.my_prices" "select * from my_prices" \
  "stores@g_er3:informix.my_prices" "select * from my_prices"

© 2014 IBM Corporation35

36 ER Send-Only Replicates  Multiple send-only participants for the replicate can be defined within a single replicate definition statement In this case the g_er2 and g_er3 instances are sending their activity up to g_er1  With this new functionality, only one replicate definition is required on one node –The send-only replicates will automatically be created on the other nodes

cdr define repl -u -c g_er1 -C timestamp -f y district_claims_processing \
  "stores@g_er1:informix.district_claims_processing" "select * from district_claims_processing" \
  "S stores@g_er2:informix.claims_processing" "select * from claims_processing" \
  "S stores@g_er3:informix.claims_processing" "select * from claims_processing"

© 2014 IBM Corporation36

37 ER Send-Only Replicates  There are other ways to create send-only replicates –A new mode flag when realizing templates Define the template as always, then when realizing it, add the mode flag –Change the replication mode of the entire instance with the cdr modify server command

cdr realize template my_template --mode=sendonly
cdr modify server --mode=sendonly

© 2014 IBM Corporation37

38 Questions © 2014 IBM Corporation38

39 Flexible Grid – Object Replication © 2014 IBM Corporation

40 Flexible Grid and Flat File Transfers  Flexible Grid was introduced with Informix 11.70. It provides the ability to –Create small to massively sized grids easily –Replicate data without a primary key**** –Create ER replication as part of a create table DDL statement –Replicate DDL statements across multiple nodes create table, create index, create procedure.... –Make instance changes across all members of the Grid Add / drop logical logs, chunks, dbspaces, update $ONCONFIG, etc. –Support the oncmsm connection agent against Grid clusters –Replicate the execution of a statement rather than just the results of the statement executed somewhere else**** Helpful if you have triggers that execute locally on each node –Turn ER replication on or off within the transaction and not just at the start of the transaction**** **** Has data consistency implications © 2014 IBM Corporation40

41 Flexible Grid and Flat File Transfers  But duplicating instances into a cluster or administering a cluster is more than just executing DDL or instance configuration operations and replicating data –There are UDRs, shell scripts, program executables and more that might need to be propagated throughout the cluster For example, what about updating the $SQLHOSTS file?  Informix 12.10 introduces the capability of replicating flat files to instances in a cluster –Uses the ifx_grid_copy() function to read from / write to a specific directory on each server, configured with the GRIDCOPY_DIR parameter © 2014 IBM Corporation41

42 Flexible Grid and Flat File Transfers  The GRIDCOPY_DIR must be $INFORMIXDIR or a subdirectory thereof  GRIDCOPY_DIR does not have to be identical across the cluster  The ifx_grid_copy() operation can specify different source and target locations for the files –Can read from $INFORMIXDIR/scripts_repository but send to $INFORMIXDIR/custom_scripts on all the targets © 2014 IBM Corporation42

43 Flexible Grid and Flat File Transfers  Syntax for the ifx_grid_copy() function –The parent directory for the source and target is assumed to be the value of GRIDCOPY_DIR –If a target location is not specified, the file will be copied to the same location as on the source –If the target location directory does not exist, it will be created by the copy operation –The file name can be changed as part of the copy process For example, the source file is called my_source but when copied it is renamed to new_file

execute function ifx_grid_copy('grid_name', 'path_and_source_file' [, 'path_and_target_file'])

© 2014 IBM Corporation43

44 Flexible Grid and Flat File Transfers  For example –If the source GRIDCOPY_DIR = my_stuff and the target GRIDCOPY_DIR = custom_scripts –The operation will copy $INFORMIXDIR/my_stuff/my_new_shell.sh to $INFORMIXDIR/custom_scripts/my_new_shell.sh on the target

execute function ifx_grid_copy('my_test_grid', 'my_new_shell.sh')

© 2014 IBM Corporation44

45 Flexible Grid and Flat File Transfers  For example (cont.) –If the source GRIDCOPY_DIR = my_stuff and the target GRIDCOPY_DIR is not configured –The operation will copy $INFORMIXDIR/my_stuff/source_file.sh to $INFORMIXDIR/yoyo/new_file.sh on the target Notice the file name change

execute function ifx_grid_copy('my_test_grid', 'source_file.sh', 'yoyo/new_file.sh')

© 2014 IBM Corporation45

46 Questions © 2014 IBM Corporation46

47 Grid Queries © 2014 IBM Corporation

48 SQL Enhancements – Grid Queries  In a Flexible Grid environment, it is possible for a cluster of instances to contain the same dbspaces and tables but no replicated data –Common in retail and/or distribution environments where stores / warehouses have different SKUs (products) and inventory levels  Currently, gathering data from multiple instances requires executing multiple SQL operations into a temporary table or joining the results  Informix 12.10 introduces the grid query operation which, when executed from an administrative node, will gather data from the requested nodes with a single SQL statement © 2014 IBM Corporation48

49 SQL Enhancements – Grid Queries  Grid queries can be executed against the entire cluster or against a subset of the instances –These subsets are defined as “regions” but have nothing to do with geography  The cdr define region command is used to define regions –Syntax region_name must be unique across all clusters It will be used by SQL statements and cdr commands, so it must resolve to the correct region list_of_instances is a whitespace-separated list of server group names

cdr define region [--grid | -g]=grid_name region_name [list_of_instances]

© 2014 IBM Corporation49

50 SQL Enhancements – Grid Queries  Example –Instances can be members of more than one region if needed –Regions can contain a subset of the instances of another region The great_west region can contain g_inst_1, g_inst_2, g_inst_3, g_inst_4 The pac_coast region can contain g_inst_1, g_inst_3

cdr define region -g=my_grid region_1 g_inst_1 g_inst_2;
cdr define region --grid=my_grid region_2 g_inst_3 g_inst_4;

© 2014 IBM Corporation50

51 SQL Enhancements – Grid Queries  Regions cannot be modified; they have to be dropped and recreated (see the sketch below) –Use the cdr delete region command

cdr delete region region_1;

© 2014 IBM Corporation51
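Since regions are immutable, changing membership means a delete followed by a define; a hypothetical re-creation of region_1 with an extra instance (the instance names are illustrative):

cdr delete region region_1;
cdr define region -g=my_grid region_1 g_inst_1 g_inst_2 g_inst_5;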

52 SQL Enhancements – Grid Queries  After defining any regions that are needed, the next step toward executing grid queries is to verify the consistency of the tables across the grid –Ensures the tables' schemas are identical across the grid All the columns and data types match, so a consistent response to the query can occur –Prevents table alters from occurring unless executed through a grid operation Connect to the grid, execute the alter, disconnect from the grid © 2014 IBM Corporation52

53 SQL Enhancements – Grid Queries  The cdr change gridtable command is used to identify tables for grid query operations aka “gridtables” –Syntax Where [--add | -a | --delete | -d] – the following table(s) can be added or removed from the gridtable list [--all | -A | list_of_tables ] – either all or list of tables (white space separated) cdr change gridtable [--grid | -g]=grid_name [--database | -D]=database_name [--add | -a | --delete | -d] [--all | -A | list_of_tables ] © 2014 IBM Corporation53

54 SQL Enhancements – Grid Queries  Examples –Notice long and short versions of the commands are used Add all ds2 database tables Delete the customer table from the gridtables list Add the my_tab_1 and my_tab_2 tables to the gridtables list

cdr change gridtable -g=my_grid -D=ds2 -a --all;
cdr change gridtable -g=my_grid -D=ds2 -d customer;
cdr change gridtable --grid=my_grid --database=ds2 --add my_tab_1 my_tab_2

© 2014 IBM Corporation54

55 SQL Enhancements – Grid Queries  With any regions defined and tables added to the gridtable list, you can now execute grid queries  Restrictions –Only query operations are supported –Queries must be “simple”; they cannot contain subqueries, joins, unions, or the intersect, minus, or except qualifiers The grid query itself *can* be a subquery nested inside another query though (see the sketch below) –Queries can only be executed against “normal” tables, not against views, synonyms or external tables The exception is sysmaster tables, which are allowed –Queries can only be executed against data types that are supported by distributed operations Excludes TimeSeries and any extended types that can't be used in distributed operations © 2014 IBM Corporation55
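A hypothetical illustration of nesting a grid query inside a larger local query; the tables and columns are invented, and placing the grid clause inside a derived table this way is an assumption based on the restriction above, not confirmed syntax:

select s.store_name, g.sku
from stores s, (select store_id, sku from low_stock grid 'my_grid') g
where s.store_id = g.store_id;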

56 SQL Enhancements – Grid Queries  New syntax for the select operation – the grid clause –Syntax at the statement level –The optional all keyword determines whether all matching rows are returned from all instances (a union all qualifier) or just the unique values (a basic union qualifier) The default is union, i.e. unique values

select column_names from table_names grid [all] ['region_name' | 'grid_name'] where .....

© 2014 IBM Corporation56
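A hypothetical example of the grid clause; the inventory table, its columns, and the pac_coast region are illustrative:

-- return low-stock rows from every node in the pac_coast region,
-- keeping duplicate rows from each node (union all semantics)
select sku, on_hand from inventory grid all 'pac_coast' where on_hand < 10;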

57 SQL Enhancements – Grid Queries  New syntax for the select operation – at the session level –Rather than modifying existing statements, the scope and union qualifier can be set at the session level With the set environment select_grid operation Use the default union operator on the grid queries Use the union all operator on the grid queries

set environment select_grid ['grid_name' | 'region_name']
set environment select_grid_all ['grid_name' | 'region_name']

© 2014 IBM Corporation57

58 SQL Enhancements – Grid Queries  New syntax for the select operation – at the session level (cont.) –Once set, all queries will execute as grid queries –Only one default setting can be active at a time To change from one setting to the other, re-execute the set environment command –These settings can be overridden for any individual statement(s) by including a grid clause within the statement (see the sketch below) © 2014 IBM Corporation58
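A hypothetical mix of a session default and a per-statement override; the orders table, its columns, and the names are illustrative:

set environment select_grid 'pac_coast';
-- runs as a grid query against the pac_coast region
select order_num, total_price from orders where total_price > 100;
-- the inline grid clause overrides the session setting for this statement only
select order_num, total_price from orders grid all 'my_grid' where total_price > 100;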

59 SQL Enhancements – Grid Queries  To turn off grid queries –Use the option that matches the command that enabled grid queries –Individual statement(s) can still execute as a grid query by including the grid clause in the statement(s)

set environment select_grid default;
set environment select_grid_all default;

© 2014 IBM Corporation59

60 SQL Enhancements – Grid Queries  But what happens if a node within the grid / region is not available when the grid query executes? –It depends; the query can either Abort and return an error Execute with the available nodes  You can query later to see how many and which nodes didn't participate  Set at a session level with the set environment ifx_node_skip operation © 2014 IBM Corporation60

61 SQL Enhancements – Grid Queries  Syntax –Where default | off – the default setting; the query is aborted and an error returned on – rows are returned from the nodes available at the time the query was executed  When the environment is set to on, any skipped nodes are captured and can be returned to the calling session –Important – this information is gathered on a statement-by-statement basis and only lasts until the next grid query is executed in that session

set environment ifx_node_skip [default | off | on]

© 2014 IBM Corporation61

62 SQL Enhancements – Grid Queries  To determine how many nodes, if any, were skipped and their names:

execute function ifx_gridquery_skipped_nodes_count()
– Returns an integer value with the number of skipped nodes

execute function ifx_gridquery_skipped_nodes()
– Returns an lvarchar with the name of one of the skipped nodes To get all the node names, you will need to execute this the same number of times as the count returned by ifx_gridquery_skipped_nodes_count()  All names aren't returned at once, to facilitate easier parsing of the result set  The function operates like a fetch against a cursor

© 2014 IBM Corporation62

63 SQL Enhancements – Grid Queries  Example

set environment select_grid my_region;
set environment ifx_node_skip on;
select.... from.....;
{results returned}
execute function ifx_gridquery_skipped_nodes_count();
3
execute function ifx_gridquery_skipped_nodes();
g_inst_2
execute function ifx_gridquery_skipped_nodes();
g_inst_19
execute function ifx_gridquery_skipped_nodes();
g_inst_38

© 2014 IBM Corporation63

64 SQL Enhancements – Grid Queries  In the event a schema change needs to be made –The change must be made through a grid operation as explained earlier –A metadata flag will be set indicating an alter operation is in-flight Prevents any grid queries from executing –As tables are being altered and acknowledged as completed, a cdr remaster gridtable operation is automatically executed to re-verify consistency across the cluster –When the cdr remaster gridtable operation returns 0 (zero), the metadata flag is removed and grid queries can be resumed –The cdr remaster gridtable operation can be invoked by an administrator to check the status of the alter operation Can know when to turn on grid query applications © 2014 IBM Corporation64

65 Flexible Grid - Deferred DDL Propagation © 2014 IBM Corporation

66 Deferred DDL Propagation in a Grid  One of the most difficult administrative tasks is rolling out new versions of applications and the database schema changes that can accompany them  By their very definition, some sort of outage is required on each node to update the binaries and / or database structures –Easy to accomplish in a dev / test environment where usage is limited and data volumes are usually smaller than in production –Very difficult to accomplish in production without disrupting end-user processing volume The Connection Manager can help with this by shuffling connections from one node to others, then back again The application and database changes still have to be tested one last time at production levels without affecting the other nodes Not possible in an H/A (clone) cluster Possible but difficult in an ER / Grid cluster © 2014 IBM Corporation66

67 Deferred DDL Propagation in a Grid  With new flags to the ifx_grid_connect() procedure, it is possible to provisionally roll out schema changes for testing in a production grid –Data replication on the changed objects can be enabled if desired  Once limited production-level testing has occurred, the schema changes can be released to the rest of the grid  If the testing is not successful, the changes can be removed from the system © 2014 IBM Corporation67

68 Deferred DDL Propagation in a Grid  The ifx_grid_connect() procedure now supports the following options  Where the er_enabled flag can be: 0 – ER replicates for DDL operations are not created 1 – ER replicates for DDL operations are created 2 – ER replicates for DDL operations are not created and any ER / Grid errors are suppressed so the session may connect to the cluster 3 – ER replicates for DDL operations are created and any ER / Grid errors are suppressed so the session may connect to the cluster 4 – DDL operations are deferred, ER replicates are not created 5 – DDL operations are deferred, ER replicates are created

execute procedure ifx_grid_connect('gridname', 'tagname', er_enabled)

© 2014 IBM Corporation68

69 Deferred DDL Propagation in a Grid  Still need to use the ifx_grid_disconnect() procedure to disconnect from the grid after executing the DDL operations (see the sketch below) –Its syntax has not changed © 2014 IBM Corporation69
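A minimal sketch of a deferred DDL block, assuming a hypothetical grid my_grid, tag v2_schema, and table new_feature; flag 5 defers the DDL and creates ER replicates, per the option list above:

execute procedure ifx_grid_connect('my_grid', 'v2_schema', 5);
create table new_feature (id serial, payload lvarchar(2048));
execute procedure ifx_grid_disconnect();
-- later, once testing succeeds, push the tagged DDL to the rest of the grid:
execute procedure ifx_grid_release('my_grid', 'v2_schema');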

70 Deferred DDL Propagation in a Grid  When connecting to the grid and executing deferred operations, the session must utilize tag(s) –Tags are used to identify a set of operations –Using tags, an administrator can Trace their execution Re-execute a failed operation Access information about tagged statements stored in the syscdr database  When ready to propagate the DDL changes, the tag(s) associated with the DDL command block(s) are used in the ifx_grid_release() procedure –Only one tag can be released at a time

execute procedure ifx_grid_release('gridname', 'tagname')

© 2014 IBM Corporation70

71 Deferred DDL Propagation in a Grid  If testing fails, or for any other reason, the deferred change(s) can be rolled back with the tag name(s) and the ifx_grid_remove() procedure

execute procedure ifx_grid_remove('gridname', 'tagname')

© 2014 IBM Corporation71

72 Deferred DDL Propagation in a Grid  The design intent behind this feature is that a new node can be cloned into the grid using the ifxclone utility, new functionality tested there, then DDL changes pushed out to the rest of the cluster –It doesn't have to be a new node; an existing node can be used, as shown in the example later –Seed data can also be pushed out through a cdr sync operation  Because the new node will inherit all the replication rules from the clone, it will continue to receive data updates from transactions occurring in the cluster  Any data changes made by testing to *existing* tables and data will be pushed out as well –Can be prevented by changing the replicates on the node from bi-directional to inbound only © 2014 IBM Corporation72

73 Deferred DDL Propagation in a Grid  If the DDL changes involve moving an existing column from one table to another, or deleting the column / table, the replicates have to be re-mastered in order to continue receiving updates while testing © 2014 IBM Corporation73

74 Deferred DDL Propagation in a Grid  How can this be used in a real-life situation to implement a no-downtime application upgrade? –Connection Manager plays a key role –Management needs to understand that for a small amount of time, processing capacity will be affected  The following example –Only shows one Connection Manager agent, but the process would be the same with more CM agents –Uses an existing node in a cluster, but a new node could be cloned into the cluster The basic process is the same regardless of which option you select © 2014 IBM Corporation74

75 Deferred DDL Propagation in a Grid  Begin with a grid cluster (inst_1, inst_2, inst_3) and a CM agent –Load balanced between all three instances –Initial connection string information comes from the CM agent

SLA my_prod_sla DBSERVERS=(inst_1,inst_2,inst_3)

© 2014 IBM Corporation75

76 Deferred DDL Propagation in a Grid  CM SLA modified to remove inst_3 with oncmsm -r (see the sketch below) –Existing connections are allowed to drain to 0 on inst_3

SLA my_prod_sla DBSERVERS=(inst_1,inst_2)

© 2014 IBM Corporation76
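A sketch of that reload step, assuming the Connection Manager is the my_test_cm agent from the earlier configuration example: edit its configuration file so the SLA reads DBSERVERS=(inst_1,inst_2), then restart the agent by name so it picks up the change:

oncmsm -r my_test_cm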

77 Deferred DDL Propagation in a Grid  New CM SLA added pointing to inst_3

SLA my_prod_sla DBSERVERS=(inst_1,inst_2)
SLA test_sla DBSERVERS=inst_3

© 2014 IBM Corporation77

78 Deferred DDL Propagation in a Grid  New DDL added to inst_3 with deferred apply  Application testing occurs through the new SLA

SLA my_prod_sla DBSERVERS=(inst_1,inst_2)
SLA test_sla DBSERVERS=inst_3

© 2014 IBM Corporation78

79 Deferred DDL Propagation in a Grid  Testing is successful; the new app is rolled gradually into the app servers so new connections use the new version  Prod SLA refreshed to use inst_3 for the new connections –Existing connections on inst_1 and inst_2 drain to zero

SLA my_prod_sla DBSERVERS=inst_3

© 2014 IBM Corporation79

80 Deferred DDL Propagation in a Grid  DDL changes rolled out using ifx_grid_release()  Any new seed data also pushed out –Data changes still being pushed by replication rules to the rest of the cluster

SLA my_prod_sla DBSERVERS=inst_3

© 2014 IBM Corporation80

81 Deferred DDL Propagation in a Grid  SLA updated to use all three instances –Back to a fully load-balanced environment

SLA my_prod_sla DBSERVERS=(inst_1,inst_2,inst_3)

© 2014 IBM Corporation81

82 © 2013 IBM Corporation82 The Board of Directors of the International Informix Users Group (IIUG) announce the: 2014 IIUG Informix Conference April 27 – May 1, 2014 J.W. Marriott Hotel (Brickell), Miami, Florida, USA For more details visit the official conference web site: www.iiug2014.org Register before February 28, 2014 and save $125.00 – See you in Miami! (remember, IIUG members save an additional $100!) Attention Speakers: We are now accepting Presentation Proposals thru Nov. 20, 2013. Details at the conference speaker page: www.iiug2014.org/speakers All non-IBM speakers selected receive a complimentary conference pass!

83 Questions © 2014 IBM Corporation83


