Storage Optimization Strategies Techniques for configuring your Progress OpenEdge Database in order to minimize IO operations Tom Bascom, White Star Software.


1 Storage Optimization Strategies Techniques for configuring your Progress OpenEdge Database in order to minimize IO operations Tom Bascom, White Star Software tom@wss.com

2 A Few Words about the Speaker
Tom Bascom; Progress user & roaming DBA since 1987
President, DBAppraise, LLC
– Remote database management service for OpenEdge.
– Simplifying the job of managing and monitoring the world's best business applications.
– tom@dbappraise.com
VP, White Star Software, LLC
– Expert consulting services related to all aspects of Progress and OpenEdge.
– tom@wss.com

3 We Will NOT be Talking about: SANs Servers Operating systems RAID levels … and so forth. 3

4 What Do We Mean by “Storage Optimization”? The trade press thinks it means BIG DISKS. Your CFO thinks it means BIG SAVINGS. Programmers think it means BIG DATABASES. SAN vendors think it means BIG COMMISSIONS. DBAs seek the best possible reliability and performance at a reasonable cost. 4

5 5 The Foundation of OpenEdge Storage Optimization

6 Type 2 Storage Areas
Type 2 storage areas are the foundation for all advanced features of the OpenEdge database. Type 2 areas have cluster sizes of 8, 64 or 512; cluster sizes of 0 or 1 denote Type 1 areas. Data blocks in Type 2 areas contain data from just one table.
# misc32 storage area
d "misc32_dat":11,32;8 .
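The `d` line on this slide is structure (.st) file syntax, which packs several settings together. A hypothetical, commented fragment making each field explicit (the area number, paths and extent sizes here are invented for illustration):

```
# d "<area name>":<area number>,<rows per block>;<blocks per cluster> <extent path>
#
# 32 rows per block, 8 blocks per cluster; a cluster size of
# 8 or more is what makes this a Type 2 area
d "misc32_dat":11,32;8 /db/misc32.d1 f 1024000
# each extent line repeats the full area specification;
# no trailing "f <size>" means a variable-length extent
d "misc32_dat":11,32;8 /db/misc32.d2
```

A bare `.` as the extent path, as on the slide, simply places the extent in the current directory.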

7 Only Read What You Need Because data blocks in Type 2 storage areas are “asocial”: – Locality of reference is leveraged more strongly. – Table-oriented utilities such as index rebuild, binary dump and so forth know exactly which blocks they need to read and which blocks they do not need to read. – DB features, such as the SQL-92 fast table scan and fast table drop, can operate much more effectively. 7

8 MYTH Storage optimization is just for large tables. Type 2 storage areas are just for large tables. 8

9 Truth Very small, yet active tables often dominate an application’s IO profile. And type 2 areas are a very powerful tool for addressing this. 9

10 Case Study
A system with 30,000 record reads/sec:
– The bulk of the reads were from one 10,000 record table.
– Coincidentally, -B was set to 10,000.
– That table was in a Type 1 area, and its records were widely scattered.
– Moving the table to a Type 2 area fixed the problem: only 2% of -B was now needed for this table.
– Performance improved dramatically.

11 Type 2 Storage Area Usage
Always use Type 2 areas for any area that contains data, indexes or LOBs. (The schema area, alas, is still a Type 1 area.)

12 12 How to Define Your Storage Areas

13 Use the Largest DB Block Size Large blocks reduce IO; fewer operations are needed to move the same amount of data. More data can be packed into the same space because there is proportionally less overhead. Because a large block can contain more data, it has improved odds of being a cache “hit.” Large blocks enable HW features to be leveraged: especially SAN HW. 13

14 What about Windows? There are those who would say “except for Windows.” (Because Windows is a 4K-oriented OS.) I have had good success with Windows & 8k blocks. NTFS can be changed to use an 8k block… 14

15 Use Many (Type 2) Storage Areas
Do NOT assign tables to areas based on "function." Instead, group objects by common "technical attributes." Create distinct storage areas for:
– Each very large table
– Tables with common rows-per-block settings
– Indexes versus data

16 16 Record Fragmentation

17 Fragmentation and Scatter “Fragmentation” is splitting records into multiple pieces. “Scatter” is the distance between (logically) adjacent records. 17

18 Fragmentation and Scatter
"Fragmentation" is splitting records into multiple pieces. "Scatter" is the distance between (logically) adjacent records.

$ proutil dbname -C dbanalys > dbname.dba
...
RECORD BLOCK SUMMARY FOR AREA "APP_FLAGS_Dat" : 95
-------------------------------------------------------
                               Record Size (B)    -Fragments-  Scatter
Table           Records   Size  Min  Max  Mean      Count Factor Factor
PUB.APP_FLAGS   1676180  47.9M   28   58    29    1676190    1.0    1.9
...
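Tracking fragmentation and scatter over time is easier if the dbanalys output is parsed mechanically. A sketch, assuming output shaped like the sample above (the field positions and the `sample.dba` file are assumptions, not part of the utility):

```shell
# Extract table name, fragment factor and scatter factor from
# dbanalys-style record summary lines (the last two fields).
parse_dba() {
  awk '/^PUB\./ { printf "%s frag=%s scatter=%s\n", $1, $(NF-1), $NF }' "$1"
}

# Demo against a captured fragment of dbanalys output.
cat > sample.dba <<'EOF'
PUB.APP_FLAGS 1676180 47.9M 28 58 29 1676190 1.0 1.9
EOF
parse_dba sample.dba   # prints: PUB.APP_FLAGS frag=1.0 scatter=1.9
```

Run periodically (e.g. from cron), this gives a simple trend line for when a dump and load is becoming worthwhile.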


20 Create Limit
– The minimum free space that must be left in a block when records are created
– Provides room for routine record expansion
– OE 10.2B default is 150 (4k & 8k blocks)
– Must be smaller than the toss limit
– Only rarely worth adjusting

21 Toss Limit
– The minimum free space a block needs to remain on the "RM chain"
– Avoids searching for space in blocks that don't have much
– Must be set higher than the create limit
– Default is 300 (4k & 8k blocks)
– Ideally should be less than the average row size
– Only rarely worth adjusting

22 22 Fragmentation, Create & Toss Summary

23 Create and Toss Limit Usage

Symptom: Fragmentation occurs on updates to existing records; you anticipated one fragment, but two were created.
Action:  Increase the create limit, or decrease rows per block.

Symptom: There is limited (or no) fragmentation, but database block space is being used inefficiently, and records are not expected to grow beyond their original size.
Action:  Decrease the create limit, or increase rows per block.

Symptom: You have many (thousands, not hundreds) of blocks on the RM chain with insufficient space to create new records.
Action:  Increase the toss limit.

* Create and toss limits are per area for Type 1 areas and per table for Type 2 areas.
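When the symptoms above point at a limit change, the limits can be adjusted online with proutil. A sketch only: the database, area and table names are illustrative, and these subcommands appeared around OE 10.1C, so verify the exact syntax against your release's documentation:

```shell
# Per-area limits (the only granularity for Type 1 areas)
proutil mydb -C setareacreatelimit "APP_FLAGS_Dat" 300
proutil mydb -C setareatosslimit   "APP_FLAGS_Dat" 600

# Per-table limits (Type 2 areas only)
proutil mydb -C settablecreatelimit PUB.APP_FLAGS 300
proutil mydb -C settabletosslimit   PUB.APP_FLAGS 600
```

As the slide notes, these are only rarely worth adjusting; measure the RM chain before and after any change.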

24 24 Rows Per Block

25 Why not "One Size Fits All"?
A universal setting such as 128 rows per block seems simple, and for many situations it is adequate. But too large a value may lead to fragmentation, while too small a value wastes space; it also makes advanced data analysis more difficult. And it really isn't that hard to pick good values for RPB.

26 Set Rows Per Block Optimally
Use the largest rows per block that:
– Fills the block
– But does not unnecessarily fragment it
Rough guideline:
– Next power of 2 after BlockSize / (AvgRecSize + 20)
– Example: 8192 / (220 + 20) = 34; next power of 2 = 64
Caveat: there are far more complex rules that can be used, and a great deal depends on the application's record creation & update behavior.
# misc32 storage area
d "misc32_dat":11,32;8 .
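The rough guideline above is easy to script. A minimal sketch (the helper name is made up, the +20 per-record overhead is the slide's approximation, and the 256 cap is the OpenEdge maximum rows per block):

```shell
# Next power of 2 after BlockSize / (AvgRecSize + 20), capped at 256.
suggest_rpb() {
  blocksize=$1
  avgrec=$2
  fit=$(( blocksize / (avgrec + 20) ))   # records that roughly fit per block
  rpb=1
  while [ "$rpb" -le "$fit" ]; do
    rpb=$(( rpb * 2 ))
  done
  [ "$rpb" -gt 256 ] && rpb=256          # OpenEdge caps RPB at 256
  echo "$rpb"
}

suggest_rpb 8192 220   # prints 64 (8192 / 240 = 34; next power of 2 is 64)
```

Feed it the mean record size from dbanalys for each large table, then sanity-check the result against the table on the next slides.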

27 27 RPB Example

28 Set Rows Per Block Optimally

BlkSz  RPB  Blocks  Disk (KB)  Waste/Blk  %Used   Actual RPB  IO/1,000 Recs
 1       4   3,015                  124     86%       33            33
 4       4   2,500     10,000     2,965     23%        4           250
 4       8   1,250      5,000     2,075     46%        8           125
 4      16     627      2,508       295     92%       16            62
 4      32     596      2,384       112     97%       17            59
 8       4   2,500     20,000     7,060     11%        4           250
 8      16     625      5,000     4,383     44.76%    16            62
 8      32     313      2,504       806     90%       32            31
 8      64     286      2,288       114     98%       35            29
 8     128     285      2,280       109     98%       35            29

(The first row is the original configuration.)

29 Set Rows Per Block Optimally
(Same table as the previous slide.) Oops! At 8k blocks with RPB 16 the blocks are only 44.76% used: more than half of every block is wasted.

30 Set Rows Per Block Optimally
(Same table.) Suggested: 8k blocks with RPB 64; 98% used, an actual RPB of 35, and 29 IOs per 1,000 records.

31 Rows Per Block Caveats Blocks have overhead, which varies by storage area type, block size, Progress version and by tweaking the create and toss limits. Not all data behaves the same: – Records that are created small and that grow frequently may tend to fragment if RPB is too high. – Record size distribution is not always Gaussian. If you’re unsure – round up! 31

32 32 Cluster Size

33 Blocks Per Cluster
When a Type 2 area expands, it does so a cluster at a time. Larger clusters are more efficient:
– Expansion occurs less frequently.
– Disk space is more likely to be contiguously arranged.
# misc32 storage area
d "misc32_dat":11,32;8 .

34 Why not “One Size Fits All”? A universal setting such as 512 blocks per cluster seems simple… 34

35 Set Cluster Size Optimally There is no advantage to having a cluster more than twice the size of the table. Except that you need a cluster size of at least 8 to be Type 2. Indexes are usually much smaller than data. There may be dramatic differences in the size of indexes even on the same table. 35
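Put concretely, a hypothetical .st fragment (area names, numbers and paths invented for illustration) might pair a large cluster for a big, fast-growing table with the minimum Type 2 cluster for its much smaller index area:

```
# large data area: 512-block clusters for contiguous growth
d "order_dat":20,64;512 /db/order_dat.d1
# index area: indexes are far smaller, so use the minimum
# Type 2 cluster size of 8
d "order_idx":21,8;8 /db/order_idx.d1
```

Giving a tiny index its own 512-block cluster would simply preallocate space it may never use, which is the slide's point about clusters larger than the object itself.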

36 Different Index Sizes

$ proutil dbname -C dbanalys > dbname.dba
...
RECORD BLOCK SUMMARY FOR AREA "APP_FLAGS_Dat" : 95
-------------------------------------------------------
                               Record Size (B)    -Fragments-  Scatter
Table           Records   Size  Min  Max  Mean      Count Factor Factor
PUB.APP_FLAGS   1676180  47.9M   28   58    29    1676190    1.0    1.9
...
INDEX BLOCK SUMMARY FOR AREA "APP_FLAGS_Idx" : 96
-------------------------------------------------------
Table           Index                  Flds Lvls  Blks    Size %Util Factor
PUB.APP_FLAGS   AppNo             183     1    3  4764   37.1M  99.9    1.0
                FaxDateTime       184     2    2    45  259.8K  72.4    1.6
                FaxUserNotified   185     2    2    86  450.1K  65.6    1.7

37 37 Logical Scatter

38 Logical Scatter Case Study
A process reading approximately 1,000,000 records against an un-optimized database:
– An initial run time of 2 hours
– 139 records/sec

39 Perform IO in the Optimal Order

4k DB Block
Table    Index      %Sequential  %Idx Used  Density
Table1   t1_idx1*        0%         100%      0.09
         t1_idx2         0%                   0.09
Table2   t2_idx1        69%          99%      0.51
         t2_idx2*       98%           1%      0.51
         t2_idx3        74%           0%      0.51

40 Perform IO in the Optimal Order

8k DB Block (compare with the 4k results on the previous slide)
Table    Index      %Sequential  %Idx Used  Density
Table1   t1_idx1*       71%         100%      0.10
         t1_idx2        63%           0%      0.10
Table2   t2_idx1        85%          99%      1.00
         t2_idx2*      100%           1%      1.00
         t2_idx3        83%           0%      0.99

41 Perform IO in the Optimal Order
(Same 4k vs 8k comparison.) Oops! The starred index t2_idx2 is nearly 100% sequential, yet it accounts for only 1% of index use; the workload's dominant index, t2_idx1, is the one whose sequential access matters.

42 Logical Scatter Case Study

Block Size  Hit Ratio  %Sequential  Block References  IO Ops  Time (min)
   4k          95          69            319,719      19,208     120
   4k          98          69            320,149       9,816      60
   4k          99          69            320,350       6,416      40
   8k          95          85            160,026       9,417      55
   8k          98          85            159,805       4,746      30
   8k          99          85            160,008       3,192      20

The process was improved from an initial runtime of roughly 2 hours (top line) to approximately 20 minutes (bottom line) by moving from 4k blocks and 69% sequential access at a hit ratio of approximately 95% to 8k blocks, 85% sequential access and a hit ratio of 99%.

43 43 Avoid IO, But If You Must…

44 … in Big B You Should Trust!

Layer               Time    # of Recs  # of Ops  Cost per Op  Relative
Progress to -B        0.96    100,000   203,473    0.000005         1
-B to FS Cache       10.24    100,000    26,711    0.000383        75
FS Cache to SAN       5.93    100,000    26,711    0.000222        45
-B to SAN Cache*     11.17    100,000    26,711    0.000605       120
SAN Cache to Disk   200.35    100,000    26,711    0.007500     1,500
-B to Disk          211.52    100,000    26,711    0.007919     1,585

* Used concurrent IO to eliminate FS cache

45 New Feature!
10.2B supports a new feature called "Alternate Buffer Pool." This can be used to isolate specified database objects (tables and/or indexes). The alternate buffer pool has its own distinct -B. If the database objects are smaller than -B, there is no need for the LRU algorithm. This can result in major performance improvements for small, but very active, tables.
proutil dbname -C enableB2 areaname
Table and index level selection is for Type 2 areas only!
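The slide's enableB2 command and the -B2 startup parameter fit together roughly as sketched below. The database name, area name, and pool sizes are illustrative; check the 10.2B documentation for your exact broker startup syntax:

```shell
# Mark the area holding the small-but-hot table for the
# alternate buffer pool (Type 2 area, per the slide's caveat)
proutil mydb -C enableB2 "misc32_dat"

# Start the broker with the main pool (-B) and an alternate
# pool (-B2) sized to hold the isolated objects entirely,
# so they never face LRU eviction
proserve mydb -B 100000 -B2 5000
```

The win comes from sizing -B2 larger than the isolated objects: once they fit, their blocks are effectively pinned.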

46 Conclusion Always use Type 2 storage areas. Define your storage areas based on technical attributes of the data. Static analysis isn’t enough – you need to also monitor runtime behaviors. White Star Software has a great deal of experience in optimizing storage. We would be happy to engage with any customer that would like our help! 46

47 Thank You! 47

48 Questions? 48

