WMO TECO-WIS Convention Seoul, November 8th, 2006


1 WMO TECO-WIS Convention Seoul, November 8th, 2006
Database Replication and Change Propagation Technologies for Continuous Availability

We explain how and where database replication technologies fit in a global enterprise IT infrastructure for achieving continuous availability. IBM replication products are introduced; their architecture, underlying technology, strengths, limitations, and current product research areas are discussed.

Database replication technologies allow IT infrastructures to achieve continuous availability of their enterprise operations by providing solutions for Disaster Recovery, Workload Isolation, and Information Integration. Database replication can capture and propagate changes with low latency at high throughput over long distances, while preserving database transactional integrity and tolerating intermittent connectivity. When used for Disaster Recovery, each replicated database can be fully active, and copies do not need to be identical. Trade-offs include administrative costs, the overhead of capturing and applying changes, and the lack of support for replicating certain Data Definition Language (DDL) operations. For Workload Isolation, the replication process must manage conflicts that may arise from the application workload, database constraints, loading a target database while changes are still occurring at the source, or changes arriving out of order in a multi-node configuration. Conflict resolution relies either on the timestamp and origin of each change, or on a designated master. For Information Integration, the replication process must deal with heterogeneous data schemas, sometimes heterogeneous data stores, or even different data models.

Detailed abstract: Technologies for database replication have evolved from solutions for disaster recovery to providing high availability, workload isolation, and improved data-access performance. These technologies are now evolving toward integrating information to and from heterogeneous sources, often with the need for mappings and transformations between data models. These applications of database replication technology are cumulative: replication technologies are now used to obtain a global view of a distributed enterprise, in addition to providing continuous availability and a Disaster Recovery strategy. The replication process propagates database transactions; it can also load an initial copy of the target tables (full refresh) whenever needed. The database replication process is also highly resilient: it can be interrupted and restarted at any time, and it does not depend on a permanently dedicated transport.

The WebSphere II Replication suite of products (SQL Replication, Q Replication, and Q Event Publisher) provides functions for capturing database changes from heterogeneous data stores (including Classic sources such as IMS) and applying these changes to various targets. Changes are captured either by reading the database recovery logs or by installing triggers on the source database. The changes are captured into a staging area: a relational table for SQL Replication, or MQSeries queues for Q Replication. Changes are applied by reconstructing SQL statements that update the data at the target data store. When the target is non-DB2, the changes are applied via WebSphere II federation, which makes the target data appear as if it were a local table in DB2. A DB2 database instance is then required for pushing the changes to, for example, an Oracle target.

SQL Replication's strengths are powerful data transformations (using SQL over the relational staging area) and scalability in 1-to-large-N configurations (the data being staged once for all targets). Q Replication's strengths include performance 3 to 10 times better than SQL Replication and superior conflict resolution, particularly in peer-to-peer replication configurations. Q Replication can achieve very high throughput (tens of thousands of database changes per second) and low latency (often sub-second) over long distances (thousands of kilometers).

Q Replication offers two methods for conflict detection and resolution in update-anywhere configurations. The value-based method compares propagated values against the current values at the target, and lets the change from the designated master win in case of conflict. The time-based method always preserves the change with the most recent timestamp; it uses the origin and the time of each change to detect and resolve conflicts. Because changes in multi-node configurations can arrive in any order due to network delays, the Apply program needs to be certain that it has seen updates from all nodes for a given point in time before a conflict can be resolved. For example, the Apply program needs to remember a delete that arrives before the corresponding insert (due to network delays in a multi-node configuration), so that it knows to ignore the insert when it finally arrives. Time-based conflict handling is used in peer-to-peer (or multi-master) configurations. Conflicts are logged by the Q Apply program in an EXCEPTIONS table; an administrator can analyze them and take corrective action, if needed. Applications generally prevent conflicts by logically partitioning updates across replicated databases. Conflict resolution is essential for DR scenarios, where changes not yet replicated are often replayed by the application at the standby after a failover, and need to be reconciled if and when the primary can be brought back online. We present the protocol for failover and fallback using Q Replication.

We conclude with an overview of current product research and development, which aims to increase replication throughput, reduce end-to-end propagation latency, offer better scalability (a large number of replicated nodes, either in distribution/consolidation one-way replication or in update-anywhere configurations), reduce the overhead of the replication process (CPU, log size, extra database activity), better support the integration of heterogeneous data stores, and improve usability (ease of configuration, operations, and deployment).

WMO TECO-WIS Convention, Seoul, November 8th, 2006. Serge Bourbonnais, Database Replication, Silicon Valley Laboratory.

2 Abstract - IBM Database Replication Technologies
Database replication technologies allow an IT infrastructure to achieve continuous availability of the enterprise operations by providing solutions for Disaster Recovery, Workload Isolation, and Information Integration. When used for Disaster Recovery, each replicated database can be fully active, and copies do not need to be identical; trade-offs include administrative costs and the overhead of capturing and applying changes. For Workload Isolation, the replication process can manage conflicts that may arise from the application workload, database constraints, loading a target while changes are still occurring at the source, or changes arriving out of order in a multi-node configuration. Conflict resolution relies either on the timestamp and origin of each change, or on a designated master. Configurations for data distribution and consolidation to/from hundreds of databases can also be deployed. For Information Integration, the replication process deals with heterogeneous data schemas, data stores, or even data models. IBM database replication technologies can capture and propagate changes with low latency at high throughput over long distances, while preserving database transactional integrity and tolerating system outages or intermittent connectivity.

3 Agenda Database Replication Technologies for continuous availability
In support of the Global Enterprise: From Continuous Availability to Business Integration
Where Database Replication fits
Where replication does not fit
IBM Product Architecture and Capabilities: Capture, Apply, Federation, and Transforms; Topologies, Conflict Detection and Resolution
Sample Implementations

4 Why Replication in an Information System?
Disaster Recovery. Goal: High Availability. Applications: standby copy for failover; scheduled and unscheduled outages. Requirements: minimize recovery time and eliminate or reduce data loss; preserve transactional consistency.

Workload Isolation. Goal: High Availability, improved performance. Applications: data distribution/consolidation, regional data centers, caches. Requirements: maintain live copies or subsets for working in disconnected mode, often geographically distributed; detect and resolve conflicts, if any; data mappings and transformations.

Information Integration. Goal: High Availability, improved performance, a global enterprise view. Applications: analytics, enterprise business integration. Requirements: moving data to/from heterogeneous data stores; cleansing and transformations; assembling objects with data from several sources.

From Disaster Recovery to Data Integration, the requirements accumulate. Today's replication products mostly address the first two scenarios. A third scenario for replication technology, perhaps less often exploited today but increasingly important, is to use replication for data integration: replication can be used as an event-publishing mechanism for integrating applications, or for maintaining a staging area from which to feed a data warehouse. Each of these scenarios has different technical requirements. A product can be designed to address all three classes of scenarios, with some trade-offs. The scenarios range from the simplest requirements (DR, where backup/restore of the file system can be one solution) to the most complex (EII, where data selection, transformations, and event notification require more complex technology).

5 Database Replication Application Space 
From Disaster Recovery to Information Integration: replication needs accumulate, and the semantics of what is propagated increase.

Requirements and scenarios:
Maintain a full database copy, for Disaster Recovery
Maintain a logical database subset, for Disaster Recovery and Workload Isolation
Publish changes (with transformations), for Disaster Recovery, Workload Isolation, and Data Integration

Propagated objects range from the whole database, to relational tables, to business objects, for example:
<order><oid>197</oid> <pid>AS207</pid> <desc>Wheel</desc> <qty>1</qty> </order>

At the top of the slide are the scenarios (such as DR) and requirements (e.g., publish changes with transformations). The middle shows the granularity of the objects propagated for each scenario. The bottom shows the technology and the existing Information Management products for each scenario. Initially we dealt with disk segments and log pages, then with rows, columns, and SQL expressions. Now we are dealing with business objects that might combine information from several sources (often encoded using XML). The granularity of the objects propagated or maintained by replication becomes finer as applications and requirements increase. Initially, DR was only concerned with keeping a database in hot-standby mode. Logical replication deals with tables and rows. Today, we are moving toward a higher level of abstraction that reflects how applications use the data stored in the database. With the XML store, this representation is the XML model, the languages are SQL and XQuery, and the communication paradigm includes asynchronous messaging.

Event Publishing is a particularly important play for Information Integrator. It can be used for synchronizing applications that use different and heterogeneous data stores (including files), maintaining caches, sending notifications to portals, and so on. Publishing business objects as either WebSphere MQSeries messages or Java Message Service (JMS) messages facilitates integration with WebSphere applications.

Technologies and products: Log Shipping; HADR (LUW); Disk Mirroring (GDPS, PPRC); Logical Replication (Data Propagator, Q Replication); Event Publishing (Q Event Publish); II Federation; Integration Software (DataStage).

6 Application Space for Database Replication Technologies
Database Replication is a good fit when:
Asynchronous capture and delivery is acceptable or required: outages (network, servers, a site, the RDBMS); occasionally connected systems
Sources and targets are not identical: different platforms (OS, RDBMS, even data models); different shapes (sub-setting required); row-level transformations, codepages, schemas
Update-anywhere with possible conflicts is needed (only possible with replication)
Some data loss is tolerable in case of a major disaster; often, the solution can be designed to limit loss to a few seconds
Fast delivery over large distances (thousands of kilometers) is needed: several thousand rows/second achievable (tens of thousands in some configurations)
Full refreshes of the data at the target must be avoided or minimized
Other factors: minimizing downtime, administrative cost, and application performance impact.
The database replication process preserves the semantics of the data. Replication technologies guarantee transactional consistency with resilience.

7 Limits of Database Replication technologies
Database replication is not the right fit when:
Zero data loss is required in case of a disaster (fire, flood): use synchronous technologies instead, i.e., HADR, PPRC.
Set-level transformations are required on the data: use ETL software instead. However, replication can be used to feed a staging area for ETL tools. Replication can hide the differences between the target and the source (database schema, data model, codepage, hardware architecture) and provide a continuous, asynchronous feed.
Business objects need to be assembled: develop applications in the application layer.
Other factors: a cost-benefit analysis of the solutions, given the requirements.

8 IBM SQL Replication
Staging is in relational tables; control and monitoring information is also in relational tables; transport is over a database connection. The Capture program runs at the source server and reads the database recovery log; Apply programs run at the target servers (DB2 on z/OS, iSeries, and UDB LUW). Non-DB2 targets (Informix, Oracle, Sybase, SQL Server, Teradata) are reached through DB2 Information Integrator; non-DB2 sources (Informix, Oracle, Sybase, SQL Server) are captured with triggers.

Data Propagator general architecture: data is staged in relational tables, one per replicated table. The Capture and Apply components are SQL applications. For log-based data sources, a Capture program extracts changes from the recovery log and reconstructs SQL statements to insert these changes into the staging tables. For trigger-based data sources, triggers on the source tables insert changes into a special set of staging tables, also one per source table. At the target database: for DB2 targets, the Apply program runs at the target, pulling captured changes from the staging tables; for non-DB2 targets, the Apply program typically runs at the source and pushes changes from the staging tables to the target database via DB2 Information Integrator or DB2 DataJoiner (there is no 'native' Apply program for non-DB2 targets). Today, the Capture program can only run at the source system, and the staging tables also reside on the source system. The Apply program also copies data directly from the source database to the target databases when replication is started for a table or view (cold start or full refresh).

Using relational tables for staging data is unique to IBM; it allows leveraging SQL as a transformation language and provides flexibility. Capture once, apply N times: each data copy is a different set of tables, typically in a different database on a remote server, though they could also be distinct sets of tables in the same target database.
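To illustrate the trigger-based capture path, here is a minimal sketch, assuming a source table PARTS(PARTNO, PRICE); the table and trigger names are hypothetical, and the real product generates its own staging-table layout and names:

-- Hypothetical staging (change-data) table for source table PARTS
CREATE TABLE PARTS_CD (
  OPERATION  CHAR(1)   NOT NULL,   -- 'I' insert, 'U' update, 'D' delete
  CAPTURE_TS TIMESTAMP NOT NULL,   -- when the change was captured
  PARTNO     CHAR(5),
  PRICE      DECIMAL(7,2)
)

-- Hypothetical capture trigger: records every insert on the source table
CREATE TRIGGER PARTS_INS_CAP
  AFTER INSERT ON PARTS
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  INSERT INTO PARTS_CD (OPERATION, CAPTURE_TS, PARTNO, PRICE)
  VALUES ('I', CURRENT TIMESTAMP, N.PARTNO, N.PRICE)

An Apply program can then read PARTS_CD in capture order and reconstruct the corresponding SQL statements against the target.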

9 A parenthesis: Database Federation
Remote objects (structured files, tables, spreadsheets) appear to the application as if they were local tables in a DB2 database. Local and non-local data can be manipulated in the same SQL statement:

CREATE NICKNAME ORAT3 FOR ORACLE9.SCOTT.T3
INSERT INTO ORAT3 VALUES(5)
SELECT * FROM ORAT3

Nicknames appear as local tables; here T1 is a table and ORAT3 a nickname:

> db2 list tables
Table/View    Schema     Type
T1            BOURBON    T
ORAT3         BOURBON    N
CUSTOMERS     BOURBON    T

Wrappers come with the DB2 Information Integrator product; they need to be installed once, by the DBA, with the CREATE WRAPPER command. The CREATE SERVER command is issued once per data source. Nicknames allow mappings and column sub-setting, for example mapping an integer to a character string. Information Integrator also allows federating both structured and unstructured data; for example, wrappers for flat files or XML documents are available.
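For context, an end-to-end federation setup might look like the following sketch; the server name, node alias, credentials, and column names are illustrative, and the exact options vary by data source and product version:

CREATE WRAPPER NET8                                    -- Oracle wrapper, installed once
CREATE SERVER ORA9 TYPE ORACLE VERSION '9' WRAPPER NET8
  OPTIONS (NODE 'ora_tns_alias')                       -- once per data source
CREATE USER MAPPING FOR BOURBON SERVER ORA9
  OPTIONS (REMOTE_AUTHID 'SCOTT', REMOTE_PASSWORD 'tiger')
CREATE NICKNAME ORAT3 FOR ORA9.SCOTT.T3                -- remote table as a local nickname

-- Local table T1 and remote ORAT3 joined in one statement
-- (column names C1, C2 are hypothetical):
SELECT L.C1, R.C2 FROM T1 L JOIN ORAT3 R ON L.C1 = R.C1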

10 IBM Q Replication
Staging and transport are over MQSeries persistent message queues, giving high throughput and low latency; Apply runs with parallel agents. At the source server, the Q Capture program reads the database recovery log and puts changes on send queues through a WebSphere queue manager (or client); a restart queue and an admin queue are also used, and control tables reside at both the source and the targets. At the target servers, Q Apply programs with parallel agents apply the changes. DB2 sources and targets include z/OS and UDB LUW (plus VM/VSE); non-DB2 targets (Informix, Oracle, Sybase, SQL Server, Teradata) are reached via DB2 Information Integrator.

11 Performance
Q Replication is 3 to 10 times faster than SQL Replication: higher throughput and shorter latency. Measured Capture throughput: 49,000 rows/second (V9.1). Latency of less than 2 seconds is achievable over thousands of kilometers.

Measured time to clear the receive queues after an outage: with 1,000,000 rows accumulated in the target receive queue and a continuous arrival rate of 5,000 rows per second, the time to re-sync the target database was 91 seconds (1). This implies an effective apply rate of roughly 16,000 rows per second, since about 1,455,000 rows (the backlog plus 91 seconds of new arrivals) were drained in 91 seconds.

(1) Turbo Freeway; 2 LPARs, 4 CPs for the source system and 4 CPs for the target system.

12 Q Replication subscriptions - defining target copies
Projection over columns and rows of a table:
Only changes for subscribed tables are sent
Some transactions can be ignored (e.g., by owner ID, transaction ID, with a signal or command)
Some operations can be ignored (e.g., deletes)
Rows can be filtered with a predicate, e.g., WHERE :LOCATION = 'EAST' AND :SALES > (SELECT SUM(expense) FROM STORES WHERE stores.deptno = :DEPTNO)

Database schema mapping examples:
N columns to 1 column, e.g., [:C1 || :C2]
1 column to N columns, e.g., [substr(:C2,2,3)]
Generated columns, e.g., [CURRENT TIMESTAMP]

This is a different paradigm: a publish/subscribe model, where the target must match the subscription. The filtering predicate (which determines whether or not to send a captured change) is passed to DB2 for evaluation by the Capture program. Transformations with DB2 II Q Replication are limited to the Apply side: a user-supplied stored procedure can be invoked by the Apply program with the row to apply passed as arguments. The stored procedure then performs the actual apply of the row to the target table, possibly after transforming the row; a drawback is that the stored procedure must also provide conflict detection and resolution, if needed. Example mapping: a source table ORDERS (oid, price) replicated to a target table IBMORDERS (ibmID, price, ts). Replication also handles codepage conversion and architecture differences.
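As an illustration of an Apply-side transformation, the following is a minimal sketch of a user-supplied stored procedure for the ORDERS-to-IBMORDERS mapping above; the real Q Apply stored-procedure interface passes additional control parameters, and the names here are hypothetical:

-- The procedure itself applies the row to the target table,
-- adding a generated timestamp column.
CREATE PROCEDURE XFORM_ORDERS (IN p_oid INTEGER, IN p_price DECIMAL(9,2))
LANGUAGE SQL
BEGIN
  INSERT INTO IBMORDERS (ibmID, price, ts)
  VALUES (p_oid, p_price, CURRENT TIMESTAMP);
END

Because the procedure performs the actual insert, it would also have to detect and resolve conflicts itself if the configuration requires it.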

13 Q Replication Subscription Types
Unidirectional: changes are replicated in one direction, in 1:N or N:1 topologies (distribution and consolidation); changes can be filtered and transformed.

Bidirectional (master/slave): changes are replicated in both directions between a source and its target(s). Conflicts are detected on data values. Conflict rules: check the key, the changed columns only, or all columns; one server is designated as the winner. Conflict action: force, ignore, or merge the change. Tree topologies only; minimum overhead.

Peer-to-peer (no master; uses timestamps): conflicts are resolved by keeping the most recent version, with no master copy. Handles out-of-order arrivals (e.g., a delete before the corresponding insert). Requires extra columns and triggers.

Q Replication for Disaster Recovery (a primary with a secondary/backup): with value-based conflict detection, there is minimal overhead and automatic failover is possible, but applications cannot be moved until data is replicated at switchback, so applications are quiesced briefly during switchback; using the secondary as the winner is equivalent to "most recent update wins". With version-based conflict detection, there is overhead to maintain versions, but automatic failover and switchback are possible, and the most recent update wins.
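A minimal sketch of the value-based idea, not the product's actual implementation: the before-values propagated with each change are included in the WHERE clause, so an update succeeds only if the target row is still in the expected state (the table and values are hypothetical):

-- After-value from the source goes in SET; before-value goes in WHERE.
UPDATE ACCOUNTS
   SET BALANCE = 150.00
 WHERE ACCT_ID = 42
   AND BALANCE = 100.00

-- If zero rows qualify, the target row was changed concurrently:
-- a conflict is raised, and depending on the conflict action the
-- change from the designated winner is forced or the other is ignored.

Checking only the key, only the changed columns, or all columns (the conflict rules above) corresponds to how many before-values are included in the WHERE clause.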

14 Data Distribution from a (CCD) staging area
Q Capture reads the read/write source table and sends changes over MQSeries to Q Apply, which maintains a read-only CCD staging table; the CCD can be non-condensed (a history) or condensed (for fan-out). SQL Apply programs then read the CCD table over SQL connections and maintain the final target tables: Q Replication feeds SQL Replication.

Notes: an SQL Capture schema is required so that Q Apply can update the REGISTER and PRUNCNTL control tables; the CD owner is needed only for the GUI. Make sure the target tables are in sync before SQL Replication is started; apply cycles are missed if the targets are not loaded.

15 Consistent Changed Data (CCD) Apply targets
Usages: an AUDIT trail of database changes, answering who changed what, when, and how; a staging table for data distribution (with SQL Apply).

Example PARTS_CCD table:

COMMITSEQ  AUTHID  OPERATION  LOGMARKER            XPARTNO  PARTNO  XPRICE  PRICE
1          USER_A  U          (current timestamp)  A7571    A7571   4.31    5.03
2          USER_B  I          (current timestamp)  null     A7981   null    121.03
3          USER_A  D          (current timestamp)  null     A7981   null    null

For updates, before-values can optionally be present in the CCD (e.g., XPARTNO, XPRICE). A condensed CCD contains only the latest changed value of each row; a complete CCD is initially created with values for all rows from the source table.

CCD tables contain up to eight IBM-defined columns. The IBMSNAP_OPERATION column is a flag that indicates the type of operation: insert, update, or delete. The IBMSNAP_INTENTSEQ column contains a log record sequence number that uniquely identifies a given row change. The IBMSNAP_COMMITSEQ column contains the log record sequence number of the commit for the transaction; the combination of INTENTSEQ and COMMITSEQ shows the order of changes within a transaction. The IBMSNAP_LOGMARKER column is the time at which the data was committed. The IBMSNAP_AUTHID column is the authorization ID associated with the transaction; it is used on both Linux, UNIX, and Windows and z/OS (on z/OS, it is the primary authorization ID). The IBMSNAP_AUTHTKN column is the authorization token associated with the transaction; it is for z/OS only, where it is the correlation ID, and it is NULL on Linux, UNIX, and Windows. The IBMSNAP_PLANID column is the plan name associated with the transaction; it is for z/OS only and is NULL on Linux, UNIX, and Windows. Finally, the IBMSNAP_UOWID column is the unit-of-work identifier from the log record for this unit of work; it is used on both Linux, UNIX, and Windows and z/OS. The remaining columns are documented in the product manuals.
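Given those columns, the audit usage is a simple query over the CCD table; a sketch, assuming the full IBMSNAP_ column names rather than the abbreviated headings in the example above:

-- Who changed part A7981, when, and how?
SELECT IBMSNAP_AUTHID    AS who,
       IBMSNAP_OPERATION AS op,            -- 'I', 'U', or 'D'
       IBMSNAP_LOGMARKER AS committed_at,
       PARTNO, PRICE
  FROM PARTS_CCD
 WHERE PARTNO = 'A7981'
 ORDER BY IBMSNAP_COMMITSEQ, IBMSNAP_INTENTSEQ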

16 Event Publishing Capture Function
Capture changed data in real time; correlate changes by transaction within a single database; output XML or CSV. Usage: building the data warehouse, business integration, auditing requirements. Log-based capture supports DB2 on z/OS and LUW, as well as Classic sources (IMS, VSAM, Software AG Adabas, CA-IDMS). Consumers include WebSphere MQ, the WebSphere MQ Integrator Broker, WebSphere Business Integration, DataStage, JMS-aware applications, user applications, and target databases.

WebSphere II Event Publishers facilitate numerous business integration scenarios. They make it easy to link data events with business processes as well as distributed data targets. Any application or service that integrates with WebSphere MQ directly, or that supports the Java Message Service (JMS), can asynchronously receive the data changes as they are published. Some of the scenarios ideally addressed by event publishing are:
Application-to-application integration, e.g., operational customer data changes can be pushed to a packaged customer relationship management (CRM) application.
Initiating business processes, e.g., a customer record insert could initiate a new-customer process that includes sending a welcome e-mail, executing a credit verification, and updating CRM system data.
Monitoring critical data events whose values indicate the need for a specific process, e.g., a monitored inventory value can be used to drive a product-restocking workflow when a threshold value is reached.
Feeding a data warehouse, data mart, or operational data store, e.g., changed data can be pushed to an extract-transform-load (ETL) product that then populates a data store.
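For a sense of the output format, a published change might be delivered as a simplified XML message along these lines; this is illustrative only, and the actual Event Publisher message schema is richer:

<msg>
  <trans commitTime="2006-11-08T09:30:00" authID="USER_A">
    <insertRow table="ORDERS">
      <col name="OID">197</col>
      <col name="PID">AS207</col>
      <col name="QTY">1</col>
    </insertRow>
  </trans>
</msg>

A JMS-aware application can consume such messages asynchronously from the queue and, for example, trigger one of the business processes listed above.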

17 Mazda
Challenge: support 700 dealers in the USA; trouble matching customer demand with available inventory; more current data needed to track sales achievements against period-end goals.

Solution: sales and inventory information is replicated every minute to the portal server; improved access to current data without changes to the existing IT infrastructure.

Business benefits: increased auto sales; improved dealer satisfaction; currency of information improved by 93%.

Technology benefits: re-used the existing application and database infrastructure; decreased network load compared to full data refreshes 4 times an hour; ease and speed of deployment.

Source: current as of 02/2004. Location: NA, Irvine, CA, USA. Industry: automotive.

Customer background: Based in Hiroshima, Japan, the Mazda Motor Corporation manufactures a diverse line of passenger cars and commercial vehicles. Mazda cars and trucks are assembled in 14 countries around the world, and it is the only auto manufacturer that features three types of engines: conventional gasoline piston, diesel, and rotary. Mazda first delivered vehicles to the U.S. in 1970, and in 1987 the company entered an alliance with Ford. As of July 1, 2002, the Mazda/Ford alliance employed approximately 1,900 people at its Flat Rock, Michigan manufacturing plant, with an annual production capacity of approximately 240,000 units. Mazda North American Operations (MNAO) is responsible for the sales and marketing, customer service, and parts support of Mazda vehicles in the U.S. Headquartered in Irvine, Calif., MNAO has more than 700 dealerships nationwide.

Business need: Mazda dealers in the U.S. were having trouble matching customer demand with available inventory because data in the Mazda dealer portal was not available in real time. Given network bandwidth limitations, full refreshes of data on the portal were limited to every 15 minutes, and to track achievements related to period-end sales goals, dealers needed more timely data. Consequently, MNAO wanted a solution that could improve the timeliness of reporting sales and inventory information to a nationwide network of more than 700 authorized dealers, while leveraging existing technology.

Solution: MNAO implemented a solution that uses IBM WebSphere Information Integrator to replicate sales and inventory information from IBM DB2 UDB running on MVS to a Microsoft SQL Server (the portal data server) every minute. Previously, network considerations limited sales updates to four times an hour. The WebSphere Information Integrator solution was ideal because it enabled Mazda to significantly improve access to sales data without requiring major changes to its existing information technology (IT) infrastructure, including the SQL Server-based reporting database.

Benefits: By providing sales people with up-to-the-minute sales and inventory data, the WebSphere Information Integrator solution is helping Mazda increase auto sales and dealer satisfaction while using the existing application infrastructure. The improved sales reporting system has been enthusiastically welcomed by MNAO's network of 700 dealers, and it has improved sales and inventory information while decreasing the network load (previously, updating the inventory only four times an hour placed a dramatic load on the company-wide network). The solution also saved MNAO money because it did not require the purchase of additional software, allowing MNAO to continue to leverage its SQL Server database.
“Within 5 weeks of receiving the [WebSphere] Information Integrator product we were able to implement it in our … environments. It now provides us up to the minute sales activity.” Joe Neria, Software Consultant, Mazda.

18 International provider of financial & investment services
Challenge: a corporate initiative to provide customers better-performing real-time queries by utilizing multiple sites; replication of critical order-processing details for core business functionality.

Solution: Q Replication for high-speed movement of up to 10 million transactions to a secondary site several thousand miles away; the current implementation is unidirectional, with peer-to-peer plans; trading data is replicated from Texas to New Hampshire.

Business benefits: replicating 5-10 million transactions with less than 2 seconds of latency; more efficient and cost-effective resource utilization; the secondary platform services reporting and business intelligence queries and acts as a backup to the primary.

Technology benefits: real-time backup to the secondary system results in increased capacity for peak workloads; less than 2 seconds of latency cross-country.

19 CitiStreet
Overview: CitiStreet is one of the largest and most experienced global benefits providers, servicing over 9 million plan participants across all markets. CitiStreet was formed in a partnership between subsidiaries of State Street Corporation and Citigroup. Industry: financial services.

Solution: support redundant, active single sign-on applications for failover processing, replicating profile changes between them in real time.

Business benefits: ensure application availability for plan participants and sponsors; the new solutions from IBM improve data integrity with a reduced level of maintenance.

Technology benefits: maintain bidirectional synchronization of profile updates (approximately 175,000 updates daily) in real time; support single sign-on access through both Web and IVR applications, ensuring 24x7 portal access for plan participants and sponsors.

Implementation: CitiStreet is using WebSphere Information Integrator's new Q Replication to support availability and performance for its single sign-on (SSO) application. Participants and plan sponsors access applications either through the Web or through an Interactive Voice Response (IVR) application. These requests are routed to an SSO application supported by a DB2 UDB database running on AIX. The SSO application authenticates the user and manages profile updates. Profile changes are immediately replicated to the other SSO database (about 175,000 daily). If a failure occurs in one database, authentication requests are re-routed to the other, eliminating any downtime in the authentication process. Likewise, CitiStreet provides redundant data lines for replication among DB2 instances to ensure the speed and stability of the replication process itself.

"Since nearly 10 million of CitiStreet customers are offered 24-hour access to their retirement accounts, the company can't afford downtime and must be able to replicate data changes when they happen. We fully replicate our database over redundancy data lines, so to us the stability and speed of that asynchronous replication is strategic for us." Barry Strasnick, CIO, CitiStreet.

20 Summary
IBM develops data propagation technologies to provide Continuous Availability and to achieve a globally integrated view of the enterprise in a heterogeneous environment. Q Replication (IBM WebSphere Replication Server) delivers low latency, high throughput, and resilience. It is an industrial-strength, best-of-breed solution for heavy OLTP workloads, preserving transactional integrity throughout outages while minimizing the need for full data refreshes.

Other points worth adding as differentiators:
1. Apply handles referential-integrity and uniqueness constraints effectively (e.g., it reorders transactions if needed).
2. Apply can automatically perform the initial load of the data efficiently (up to 16 loads in parallel).

