
1 Adaptive Server IQ Multiplex

2 AS IQ Multiplex
All of the released software for ASIQ is now Multiplex enabled
All platforms are now Multiplex enabled
You can still run a “simple” IQ system; however, this is called a Simplex system

3 Glossary – 1
Simplex – An IQ installation with only one node or instance of IQ running
Multiplex – An IQ installation with multiple instances of IQ executing against a single IQ database
Writer Node – The only IQ instance in a multiplex that can execute DML commands

4 Glossary – 2
Reader Node – All the other instances of IQ running in the multiplex, which are excluded from running DML commands
Synchronize – The command to implement DDL changes across the Multiplex

5 What is a Multiplex?
An IQ Multiplex consists of:
–A number of nodes, each running IQ 12
–A shared disk subsystem connected to all nodes
–One node designated as the writer node, the others as readers
Why do IQ Multiplex?
–Scale query performance past the limits of one node
–Match I/O to CPU capabilities
–This is important: IQ-M will NOT increase individual query speed

6 A Multiplex – Marketing View
IQ 12.0 provides Table Level Versioning
Versions are maintained while queries are running; users see no change in the database until the update is complete
Can add as many CPUs as needed to speed up the load process, 100% parallelized
[Diagram: IQ Multiplex functions – multiple VLM Unix/NT Alpha and Compaq server IQ nodes]

7 A Multiplex – Real View
[Diagram: a shared IQ Main Store accessed by multiple AS/IQ 12.0 nodes, each with its own Catalog Store, IQ Temp Store and IQ Message Log]

8 How Does it Work?
We allow for the creation of two sorts of IQ-M engines:
–A Writer Node
–Reader Nodes
In a reader we restrict the ability to write to the IQ Main Store, without impacting I/O to the catalog and the IQ Temp Store
In the writer node we restrict the ability to perform DDL changes unless the writer is brought up in Simplex mode (as the only IQ-M instance executing in the Multiplex)

9 Steps to Generate – 1
In a very simple form, the steps to build a Multiplex are as follows (a sketch of the disk setup follows this list)
–Create a shared area of disk for the IQ Main Store
 This is usually a fibre channel disk array (but could be an NFS disk area)
–Create an area of disk that can be seen (mounted) by all of the nodes (computers) in the Multiplex
 This is (practically always) an NFS region
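
Purely as an illustration of those two disk areas – the host, export and mount-point names below are invented, and the deck does not prescribe any particular commands:

    # Hypothetical setup on each node (Unix)
    # 1. IQ Main Store: raw fibre-channel devices presented by the
    #    shared disk array to every node, e.g. /dev/rdsk/c2t0d0
    # 2. Common NFS area, mounted identically on all nodes:
    mount fileserver:/export/iqmpx /iqmpx/shared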

10 Steps to Generate – 2
Build an IQ-M database using what will become the Writer Node
Create the reader nodes
Test the system
Bring down the Multiplex and bring up the writer in Simplex mode
Build all the tables and indexes using the Simplex writer (see the sketch below)
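
A minimal sketch of the DDL run against the Simplex writer. The table, column and index names are invented for illustration; CREATE TABLE and the IQ index types (LF, HG, HNG) are standard ASIQ syntax:

    -- Run only while the writer is up in Simplex mode
    -- (DDL is not allowed once the Multiplex is running)
    CREATE TABLE sales (
        sale_id  INT NOT NULL,
        state    CHAR(2),
        amount   NUMERIC(12,2)
    );
    CREATE LF  INDEX sales_state_lf ON sales (state);    -- low-cardinality column
    CREATE HG  INDEX sales_id_hg    ON sales (sale_id);  -- high-cardinality column
    CREATE HNG INDEX sales_amt_hng  ON sales (amount);   -- range/aggregate queries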

11 Steps to Generate – 3
Synchronize the Multiplex
Bring up the Writer and all the reader nodes
Tune the Writer and Reader Nodes
Apply the SQL changes required by each node
–Users/permissions etc.

12 Sybase Central
All of the above routines can be performed by Sybase Central
The work required to generate the Multiplex is quite high and is far easier to accomplish using Sybase Central
The only area that I would still script would be the DDL (table and index creation)
I might also run the synchronize as a script – although this is not an everyday operation

13 Tuning
There are three areas of immense importance when considering the tuning of a Multiplex system
1. Get the disk layout correct!
2. Tune the Writer Node for load speed
3. Tune the Reader nodes for query performance
–Maybe even tune individual nodes differently for differing query or user types

14 Disk Layout
This is so vital I cannot emphasize it enough
The entire performance of the Multiplex will come down to how well the disk has been laid out
There are really only two disk subsystems we should consider for IQ-M
–EMC disk arrays
–2-dimensional striping on a disk farm (SoS)

15 EMC
We have already discussed this in some detail earlier in the course
Multiplex is where the speed, power and resilience of the EMC strategy really pay off
Also, EMC technicians will generally tune the array to work the way we want it to

16 2-Dimensional Striping (SoS)
Again, we have talked about this earlier
This must be done correctly: we have to set up the RAID striping and the IQ striping very accurately
This has to be done in conjunction with the choice of the IQ page and block sizes
Generally we will be working with 128KB or 256KB IQ page sizes (a worked example follows)
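
As an illustration of why the two stripings must be set up together (the RAID stripe-unit figures are assumptions, not from the deck):

    IQ page size     = 128KB
    stripe unit 64KB  -> every page write spans 2 spindles
    stripe unit 128KB -> every page write lands on 1 spindle,
                         consecutive pages rotate across the stripe set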

17 Synchronization – 1
Before we talk about tuning the nodes, it is time to discuss Multiplex synchronization
When DML changes are applied by the writer to the Multiplex, the versioning information is passed via ASA replication
–This is automatic and completely invisible to the user
When a writer (in Simplex mode) applies a DDL change to the database, the readers need to be synchronized to the writer

18 Synchronization – 2
The synchronization process is simply the copying of the catalog store from the writer to each of the reader nodes
So, effectively, the readers are then clones of the writer node
In order to tune the readers (perhaps because of differing hardware or query constraints) we need the ability to modify them

19 Reader Configuration
For the writer node and each reader node there is a directory structure under which they execute (on the shared disk)
Each instance has its own config file that none of the Multiplex operations touch
It is in this config file that we place the startup configuration for the reader (and writer) nodes
Here is where we can specify the cache sizes etc. (a sample config sketch follows)
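
A minimal sketch of what such a per-node config file might contain. The server name and sizes are invented; -n, -x, -c and -gm are standard ASA/IQ server switches, while -iqmc and -iqtc (main and temp cache, in MB) are the IQ cache switches as documented in later 12.x releases – treat the exact spellings as an assumption for 12.0:

    # reader1/params.cfg – hypothetical per-node startup file
    -n  iq_reader1           # server (node) name
    -x  tcpip{port=2638}     # network listener
    -gm 50                   # max concurrent connections
    -c  64M                  # catalog store cache
    -iqmc 1200               # IQ main cache (MB) – tuned per node
    -iqtc 800                # IQ temp cache (MB) – tuned per node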

20 postsync.sql – 1
After a synchronization all the readers have exactly the same catalog store as the writer (apart from some subtle changes due to file positioning for the temp stores etc.)
In order to allow for differing user populations, configurations or permissions between the reader nodes, there is a file (for each reader) that is executed the first time a reader is brought up after a synchronization

21 postsync.sql – 2
Any SQL can be placed in this file for execution after a synchronization
The only restrictions are
–No DDL (not allowed on a Multiplex)
–No DML (only allowed on the writer)
–No transaction statements
Generally this would be used for the addition (or removal) of users, permissions etc. (see the example below)
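
A sketch of what one reader’s postsync.sql might contain. The user names, passwords and option value are invented; GRANT/REVOKE CONNECT and SET OPTION are standard ASA/IQ statements and fall within the allowed (non-DDL, non-DML, non-transactional) set:

    -- postsync.sql for reader node 1 (names are illustrative)
    GRANT CONNECT TO mkt_user1 IDENTIFIED BY 'secret1';
    GRANT CONNECT TO mkt_user2 IDENTIFIED BY 'secret2';
    -- remove a login that should exist only on the writer
    REVOKE CONNECT FROM load_admin;
    -- a per-node option tweak
    SET OPTION PUBLIC.QUERY_TEMP_SPACE_LIMIT = 0;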

22 postsync.sql – 3
We can add a series of stored procedures to the Writer Node; after synchronization these are then present on all the reader nodes
These procedures may perform all of the complex tasks needed to set up the individual configuration for each reader
In this case the postsync.sql file will merely contain the call to the particular procedure that applies to this specific reader node (sketched below)
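
A sketch of that pattern; the procedure, user and option names are invented for illustration:

    -- Created once on the writer (in Simplex mode, since CREATE PROCEDURE
    -- is DDL), then propagated to every reader by the next synchronization
    CREATE PROCEDURE setup_reader1()
    BEGIN
        GRANT CONNECT TO analyst1 IDENTIFIED BY 'pwd1';
        SET OPTION PUBLIC.QUERY_TEMP_SPACE_LIMIT = 0;
    END;

    -- postsync.sql on reader node 1 then reduces to:
    CALL setup_reader1();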

23 Configuring Nodes
As we have just seen, we have these two methods to configure the nodes for specific tasks
The Writer should (in an ideal system) only handle loads, inserts, updates and deletes
This allows the Writer to be a pure load engine
The readers, relieved of the DML processing, can then be configured as query engines

24 Writer Configuration
Everything we have talked about in the course to date regarding configuration has been about balancing the servers for read and write performance
With IQ-M that need has been removed; we can tune each instance of IQ-M for a specific task
The next few slides discuss what we would want to do to generate a write-only server

25 Writer Memory
Three things are important here
–The size of the Load Memory
–The size of the Main Cache
–The size of the Temp Cache
Basically we are only concerned with having enough Main Cache to hold the active FP, LF and HNG pages
We need as much Temp Cache as we can get for the HG load
The Load Memory should be pegged as high as we can make it (an illustrative configuration follows)
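
An illustrative writer configuration under those rules. The sizes are invented for a hypothetical box; -iqmc/-iqtc are the cache switches assumed in the earlier per-node sketch, and LOAD_MEMORY_MB is the IQ option that caps load memory (the 500MB ceiling discussed on the next slides) – verify the spellings against your release:

    # writer/params.cfg – hypothetical write-only node
    -n  iq_writer
    -iqmc 300        # small main cache: active FP/LF/HNG pages only
    -iqtc 2000       # large temp cache: the HG builds live here

    -- and, in SQL, pin load memory at its maximum:
    SET OPTION PUBLIC.LOAD_MEMORY_MB = 500;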

26 Loading
[Diagram: the two-pass load pipeline. An ASCII flat file is read through raw I/O buffers into Load Memory (set to the 500MB maximum); the read is multi-threaded only if the file is row-delimited, otherwise it is single-threaded. Pass 1 (building simple index pages) uses a single thread per HNG, LF and FP index and writes completed pages via the Main Cache (kept as small as we need) to the Main IQ Store. Pass 2 (sorting groups and building B-trees) uses a single thread per HG (maybe 2) in the Temp Cache (made as large as possible); completed HG B-trees, G-Arrays and bitmaps are written to the Main Cache and then to the Main IQ Store. I/O operations to the Temp IQ Store indicate not enough Temp Cache]

27 Writer Main Cache – 1
The main cache for the writer should be as small as we can get away with
We need space for the following pages/buffers
–FP: block maps, identity pages, last pages
–LF: block maps, identity pages, bitmap headers and the last page of each bitmap
–HNG: block maps, identity pages, bitmap headers and the last page of each bitmap

28 Writer Main Cache – 2
Notice there is no space allocated for the HG indexes
The majority of the space requirement for the HG indexes is in Temp Cache
However, we will need to read in the B-trees and the G-Arrays for the HG indexes, so don’t be slavish about the smallness of the Main Cache

29 Load and User Memory
Do not constrain this
The server cannot (at present) allocate more than 500MB for this, so assume the server will require and use this amount of RAM
User overhead will also be minimal; generally you will only be running one writer task at a time
If you are running multiple table loads simultaneously, remember that each load will take up to 500MB of Load Memory (a quick worked example follows)
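
A quick worked example of that sizing rule (the number of concurrent loads is an assumption):

    loads running concurrently = 3
    load memory per load       = 500MB (the current cap)
    RAM to budget for loading  = 3 x 500MB = 1.5GB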

30 Writer Temp Cache
Whatever is left…
Then check with the IQ Monitor to confirm minimal disk activity against the Temp Store (see the sketch below)
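
A sketch of that check using the IQ buffer-cache monitor. The dummy table name and interval are arbitrary, and the IQ UTILITIES monitor syntax is taken from the IQ documentation – verify the option string against your release:

    -- start monitoring the buffer caches during a load
    CREATE TABLE iq_dummy (c1 INT);
    IQ UTILITIES MAIN INTO iq_dummy START MONITOR '-interval 10';
    -- ... run the load ...
    IQ UTILITIES MAIN INTO iq_dummy STOP MONITOR;
    -- sustained I/O against the Temp IQ Store in the monitor output
    -- means the Temp Cache is too small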

31 Reader Configuration
There are three areas to consider here
–What is the hardware configuration of the reader nodes
 Are they the same or different?
–What is each reader going to run
 How are you partitioning the readers
  By user ID
  By query type
–Do you have network spoofing or OpenSwitch above the reader nodes?

32 Reader Hardware
If the readers are machines of different specifications, then the config file needs to be altered for each machine
Even if they are all the same, you may want differing memory cache sizes to allow for the faster operation of specific query types

33 Query/User Partitioning
If you are going to partition differing queries and/or users, then you will need to modify the following areas (an example follows this list)
–Memory (probably)
–User and permission system tables
–Query options
 Pre-fetch
 Sweeper threads and wash area (small)
 Etc.
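
As an illustration of per-node query options – the values are invented, and while SWEEPER_THREADS_PERCENT and WASH_AREA_BUFFERS_PERCENT are IQ option names, treat the exact spellings and the prefetch option below as assumptions to check against your release:

    -- e.g. in the postsync.sql of a reader dedicated to large scans
    SET OPTION PUBLIC.SWEEPER_THREADS_PERCENT   = 10;  -- more cache-sweeper threads
    SET OPTION PUBLIC.WASH_AREA_BUFFERS_PERCENT = 5;   -- keep the wash area small
    SET OPTION PUBLIC.PREFETCH_BUFFER_LIMIT     = 40;  -- hypothetical prefetch setting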

34 Network Spoofing
If you are going to have users sign on to any machine – perhaps even the same application talking to multiple IQ instances – you should consider setting strict mode on

35 Strict Mode
When you generate an IQ Multiplex you can set the version transmission to strict or non-strict mode
Non-strict mode allows readers to see new versions as soon as they are ready; however, it is possible for two readers to have access to differing versions of the same table
Strict mode allows only one view of the truth: no reader will allow any transaction to see a new version until all the readers have that version
This can, will and does slow the servers down, especially if you are running DML during the operational day

36 AS IQ-M - End

