1 Some More Database Performance Knobs North American PUG Challenge
Richard Banville Software Fellow OpenEdge Development

2 Agenda
1. LRU (again)
2. Networking: Message Capacity
3. Networking: Resource Usage
4. Index Rebuild
5. Summary

3 Agenda
1. LRU (again)
2. Networking: Message Capacity
3. Networking: Resource Usage
4. Index Rebuild
5. Summary

4 LRU (again)
[Diagram: linked list of buffers from Least Recent to Most Recent end (RM Block T1, IX Block I1, ...)]
Replacement policy of the database buffer pool:
- Maintains the working set of data buffers
- Just a linked list meandering through the buffer pool -- a shared data structure
- Changes are serialized by the LRU latch
- The buffer at the LRU end is replaced with the newly read block from disk

5 LRU (again)
Pros -- a proficient predictor of block usage:
- Maintains a high buffer pool hit ratio
Cons -- housekeeping costs:
- Single-threads access to the buffer pool (even if only for an instant)
- Under high activity, a relatively high latch nap rate
Managing LRU:
- Private read-only buffers: -Bp / -BpMax (no -lruskips support until 10.2B07)
- Alternate buffer pool: -B2
- New: -lruskips and -lru2skips

6 LRU (again)
FIND FIRST T1.
[Diagram: the buffer chain before the read -- RM Block T1 sits toward the Least Recent end]

7 LRU (again)
FIND FIRST T1.
[Diagram: RM Block T1 is unlinked and moved to the Most Recent end of the chain]

8 LRU (again)
FIND FIRST T1. (again)
[Diagram: RM Block T1 is moved to the Most Recent end once more]
What about:
FOR EACH T1: END.
FOR EACH with many tables?
FOR EACH with many tables and many users?

9 Location, location, location
With -B 1,000,000:
- What does it take to evict a block from the buffer pool?
- What does it take for a buffer to travel from MRU to LRU?
- Do we really need a move to MRU on EACH access? I think not.

10 Improving Concurrency
-lruskips <n>: LRU and LFU combined
- Small numbers make a BIG difference
- Monitor OS read I/Os and LRU latch contention
- Adjust online via the _Startup-LRU-Skips field of the _Startup VST
- Adjust online via promon: R&D -> 4. Administrative Functions -> 4. Adjust LRU force skips
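The skip idea above can be modeled in a few lines. This is a hypothetical Python sketch, not the OpenEdge implementation: each buffer carries an access counter, and the shared LRU chain (guarded by the latch) is only touched when the counter exceeds the skip setting, so a hot block no longer forces a latch acquisition on every read.

```python
from collections import OrderedDict

class LRUWithSkips:
    """Toy model of a buffer-pool LRU chain with an -lruskips-style setting.

    A buffer is re-linked to the MRU end only every `skips`+1 accesses,
    so the serialized chain update happens far less often.
    Illustrative sketch only -- not the actual OpenEdge code.
    """

    def __init__(self, capacity, skips=0):
        self.capacity = capacity
        self.skips = skips
        self.chain = OrderedDict()   # key -> accesses since last MRU move; last item = MRU
        self.latch_takes = 0         # how many times the shared chain was modified

    def access(self, key):
        if key in self.chain:
            self.chain[key] += 1
            if self.chain[key] > self.skips:   # time to move the buffer to MRU
                self.chain[key] = 0
                self.latch_takes += 1          # LRU latch would be acquired here
                self.chain.move_to_end(key)
            return True                        # buffer-pool hit
        # Miss: evict the buffer at the LRU end, link the new block in at MRU.
        self.latch_takes += 1
        if len(self.chain) >= self.capacity:
            self.chain.popitem(last=False)
        self.chain[key] = 0
        return False
```

With `skips=9`, twenty hits on the same block touch the chain only twice instead of twenty times, which is exactly why small values already make a big difference.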

11 Performance -- 10.2B06 & -lruskips
[Chart: throughput vs. # of users; ~39% improvement]
Read performance improves for high-volume, high-contention NO-LOCK read workloads. Performance starts to degrade at about 140 users.

12 Performance -- 10.2B06 & -lruskips (250 users)
Note the change in LRU latch waits vs. buffer latch waits.

13 Performance -- 10.2B06 & -lruskips (250 users, big db)
Note that the focus is now on the LRU and BHT latches (not buf).

14 Performance -- 10.2B06 & -lruskips (big db)
[Chart: throughput vs. # of users; ~52%, ~15%, and ~44% improvements]
Read performance improves for high-volume, high-contention NO-LOCK read workloads. Performance starts to degrade at about 140 users.

15 Conclusions
- -lruskips can eliminate the LRU bottleneck
- LRU isn't the last bottleneck: overall improvement is relative to other contention
- Data access is limited by buffer-level contention
- Table scans over small tables have more buffer contention than scans over large tables
- Application changes can improve performance too!

16 Agenda
1. LRU (again)
2. Networking: Message Capacity
3. Networking: Resource Usage
4. Index Rebuild
5. Summary

17 Networking Control
Philosophy: throughput by keeping the server busy without remote client waits!
Process-based control:
- -Ma, -Mn, -Mi: control the order in which users are assigned to servers
- -PendConnTime
Resource-based control:
- -Mm <n>: maximum size of a network message (client & server startup)
New tuning knobs -- resource-based control:
- Alleviate excessive system CPU usage in the network layer
- Control how record data is stuffed into a network message
- Applicable to "prefetch" queries
Polling is for Unix only; message stuffing is supported on both Windows and Unix.

18 Networking -- Prefetch Query
A NO-LOCK query with guaranteed forward motion or scrolling: multiple records are stuffed into a single network message. Browsed, static, and preselected queries scroll by default.

FOR EACH customer NO-LOCK:
    ...
END.

DO PRESELECT EACH customer NO-LOCK:
    ...
END.

DEFINE QUERY cust-q FOR customer SCROLLING.
OPEN QUERY cust-q FOR EACH customer NO-LOCK.
REPEAT:
    GET NEXT cust-q.
END.

Static queries are associated with class files only. Note that the third query MUST include the SCROLLING option.

19 Server Network Message Processing Loop
Polling is for Unix only; message stuffing is supported on both Windows and Unix.

20 Server Network Message Processing Loop

21 Server Network Message Processing Loop
What's new: -NmsgWait

22 Server Network Message Processing Loop

23 Server Network Message Processing Loop

24 Server Network Message Processing Loop
poll() is system-CPU intensive. (1 ms = 1000 µs.)
~10 milliseconds to poll(0)! ~10 microseconds to copy 1 record.

25 Server Network Message Processing Loop
What's new:
- -NmsgWait: network message wait time
- -prefetchPriority: give prefetching of records priority over polling for new requests
Potential side effects?

26 Server Network Message Processing Loop

27 Process Waiting Network Message

28 Process Waiting Network Message
Non-prefetch query request

29 Process Waiting Network Message
1st record of a prefetch query request

30 Process Waiting Network Message
Secondary records of a prefetch query request: threshold not met.
Default threshold is 16 records: 4096 / 16 = 256 bytes per record.

31 Process Waiting Network Message
Secondary records of a prefetch query request: client waiting, threshold met -- send the message.

32 Process Waiting Network Message
What's new -- increase the network message fill rate:
- Improve TCP throughput
- Improve overall server performance
- Defaults have not changed
- Provides control for you: every deployment is different

33 Process Waiting Network Message
- -prefetchDelay: disregard the first-record-request check
- Threshold control (# of records vs. % full): -prefetchNumRecs, -prefetchFactor
  Whichever limit is reached first causes the message to be sent; 0% disables the fill-percentage check.
- NOTE: -Mm determines the maximum message size (4096 / 16 records = 256 bytes per record)
Potential side effects:
- Improved TCP/system performance
- Choppy behavior on the remote client?
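The "send now or keep stuffing?" decision can be sketched as follows. This is a hypothetical Python model of the thresholds the slides describe (record count via -prefetchNumRecs, fill percentage of -Mm via -prefetchFactor), not the server's actual code; the first limit reached triggers the send, and a factor of 0 disables the percentage check.

```python
def should_send(msg_bytes, msg_recs, mm=4096,
                prefetch_num_recs=16, prefetch_factor=0):
    """Decide whether a prefetch message is full enough to send.

    Illustrative sketch: whichever threshold is reached first wins.
    prefetch_factor is a fill percentage of the -Mm message size;
    0 disables that check (defaults mirror the slide's example).
    """
    if msg_recs >= prefetch_num_recs:        # record-count threshold
        return True
    if prefetch_factor and msg_bytes >= mm * prefetch_factor / 100:
        return True                           # fill-percentage threshold
    return False
```

With the defaults, a message holding 16 small records is sent even though it is nearly empty; raising the fill percentage instead packs the 4096-byte message much closer to capacity before it goes on the wire.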

34 Altering Network Message Behavior
Promon support (_Startup VST too!) -- alter online:
R&D -> 4. Administrative Functions -> 7. Server Options
Server Options:
1. Server network message wait time (seconds)
2. Delay first prefetch message: Enabled
3. Prefetch message fill percentage (%)
4. Minimum records in prefetch message: 1000
5. Suspension queue poll priority
7. Terminate a server

35 Performance -- 10.2B06 & networking changes
[Chart: throughput vs. # of users; ~212% and ~32% improvements]
Read performance improves for high-volume, high-contention NO-LOCK read workloads. Performance starts to degrade at about 140 users.

36 Agenda
1. LRU (again)
2. Networking: Message Capacity
3. Networking: Resource Usage
4. Index Rebuild
5. Summary

37 Assumptions for best performance
- Index data is segregated from table data: indexes & tables are in different storage areas
- You have enough disk space for sorting
- You understand the impact of CPU and memory consumption
- The process is allowed to use available system resources

38 Index Rebuild Parameters -- Overview
-TB               sort block size (8K - 64K; note the new limit)
-datascanthreads  # of threads for the data scan phase
-TMB              merge block size (default: -TB)
-TF               merge pool fraction of system memory (in %)
-mergethreads     # of threads per concurrent sort-group merge
-threadnum        # of concurrent sort-group merges
-TM               # of merge buffers to merge in each merge pass
-rusage           report system usage statistics
-silent           a bit quieter than before

39 Phases of Index Rebuild ("non-recoverable")
Index Scan: scan the index data area start to finish.
  I/O bound with little CPU activity. Eliminated with an area truncate.
Data Scan / Key Build: scan the table data area start to finish (an area at a time).
  Read records, build keys, insert into temp sort buffers; sort full temp-file buffer blocks (write if > -TF).
  I/O bound with CPU activity.
Sort-Merge: sort-merge -TF and/or the temp sort files.
  CPU bound with I/O activity; I/O eliminated if -TF is large enough.
Index Key Insertion: read -TF or the temp sort files and insert keys into the index.
  Formats new clusters; may raise the HWM. I/O bound with little CPU activity.

40 Phases of Index Rebuild
Index Scan: "Area 9: Index scan (Type II) complete."
- The index area is scanned start to finish (single-threaded), a block at a time with cluster hops
- Index blocks are put on the free chain for the index
- The index object is not deleted (to fix corrupt cluster or block chains)
- Order of operation: blocks are read from disk, re-formatted in memory, and written to disk as -B is exhausted
- Causes I/O in other phases for the block re-format
- Can be eliminated with a manual area truncate where possible
There is no start message for the index scan phase, only a completion message.

41 Phases of Index Rebuild
Data Scan / Key Build:
"Processing area 8 : (11463)"
"Start 4 threads for the area. (14536)"
"Area 8: Multi-threaded record scan (Type II) complete."
- The table data area is scanned start to finish (multi-threaded if -datascanthreads)
- Each thread processes the next block in the area (with cluster hops)
- The database is re-opened by each thread in read-only mode -- ensure file-handle ulimits are set high enough

42 Data Scan/Key Build
- Thread reads the next data block in the data area
- Extracts the next record from the data block and builds the index key (sort order)
- Inserts the key into a sort block (-TB: 8K through 64K)
- Sorts/merges full sort blocks into a merge block (-TMB: -TB through 64K)
- Writes the merge block to -TF, overflowing to temp sort files (.srt1, .srt2, ...) with -TMB-sized I/O

43 Sort Groups: -SG 3 (note: 8 is the minimum)
Each index is assigned to a particular sort group (hashed on index #), and each group has its own sort file (.srt1, .srt2, .srt3).
Sort file location options:
1. -T /usr1/richb/temp/ -- all sort files in one directory (I/O contention)
2. <dbname>.srt listing /usr1/richb/temp/
3. <dbname>.srt listing /usr1/richb/temp/ /usr1/richb/temp/
4. <dbname>.srt listing /usr1/richb/temp/ /usr2/richb/temp/ /usr3/richb/temp/ -- sort files in different locations; ensure enough space
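The layout above can be sketched as a small helper. This is a hypothetical Python illustration of the idea (round-robin assignment of group files to directories), not the actual .srt parsing logic: spreading the files across directories on different disks is what avoids the I/O contention of option 1.

```python
def sort_file_paths(num_groups, dirs, dbname="mydb"):
    """Assign each sort group's file to a directory round-robin.

    Hypothetical layout sketch: with a single directory every group's
    .srt<n> file lands on the same disk and the merges contend for I/O;
    with several directories the files (and their I/O) are spread out.
    """
    return {g: f"{dirs[(g - 1) % len(dirs)]}/{dbname}.srt{g}"
            for g in range(1, num_groups + 1)}
```

For example, three groups over two directories put groups 1 and 3 on the first disk and group 2 on the second.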

44 Phases of Index Rebuild
Sort-Merge: sort-merge -TF and/or the temp sort files. CPU bound with I/O activity; I/O eliminated if -TF is large enough.
"Sorting index group 3"
"Spawning 4 threads for merging of group 3."
"Sorting index group 3 complete."

45 Sort-Merge Phase
The sort blocks in each sort group have been sorted and merged into a linked list of individual merge blocks stored in -TF and temp files. These merge blocks are further merged -TM at a time to form new, larger "runs" of sorted merge blocks. -TM of these new "runs" are then merged to form even larger "runs." When only one very large "run" is left, all the key entries in the sort group are in sorted order. Sorted!
Note: even though the illustration does not depict merges of -TM "runs," they are always merged -TM at a time; it is just too difficult to depict given the screen real estate.
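The multi-pass merge described above can be sketched in Python. This is an illustrative model, not the utility's code: sorted runs are merged -TM at a time, pass after pass, until a single fully sorted run remains.

```python
import heapq

def merge_runs(runs, tm):
    """Repeatedly merge sorted runs `tm` at a time until one run remains.

    Illustrative sketch of the multi-pass sort-merge: each pass takes
    groups of up to `tm` runs and merges each group into one larger run.
    """
    runs = [list(r) for r in runs]
    while len(runs) > 1:
        next_pass = []
        for i in range(0, len(runs), tm):
            # heapq.merge lazily merges already-sorted inputs.
            next_pass.append(list(heapq.merge(*runs[i:i + tm])))
        runs = next_pass
    return runs[0] if runs else []
```

Four runs with -TM 2 take two passes (4 runs -> 2 runs -> 1 run); a larger -TM means fewer passes at the cost of merging more buffers at once.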

46 -threadnum vs -mergethreads
[Diagram: thread 1 merges sort group 1 and thread 2 merges sort group 2, each reading from its own -TF/.srt file]

47 -threadnum vs -mergethreads
B-tree insertion occurs as soon as a sort group's merge is completed: thread 0 begins b-tree insertion concurrently while threads 1 and 2 merge the remaining groups.

48 -threadnum vs -mergethreads
-threadnum 2 -mergethreads 3: merge threads merge successive "runs" of merge blocks concurrently.
[Diagram: threads 3-5 assist with group 1 and threads 6-8 with group 2. Note: 8 actively running threads]

49 -threadnum vs -mergethreads
-threadnum 2 -mergethreads 3
[Diagram: threads 3-5 move on to group 3 while threads 6-8 continue merging group 2]

50 -threadnum vs -mergethreads
-threadnum 2 -mergethreads 3: B-tree insertion occurs as soon as a sort group's merge is completed; thread 0 begins b-tree insertion concurrently. Note: 9 actively running threads.

51 Phases of Index Rebuild
Index Key Insertion: read -TF or the temp sort files and insert keys into the index. Formats new clusters; may raise the HWM. I/O bound with little CPU activity.

52 Index Key Insertion Phase
"Building index 11 (cust-num) of group 3 ..."
"Building of indexes in group 3 completed."
"Multi-threaded index sorting and building complete."
- Key entries from the sorted merge blocks are inserted into the b-tree
- Performed sequentially, an entry at a time, an index at a time
- Leaf-level insertion optimization avoids the b-tree scan (internally referred to as "cxfast" insertion)
- A leaf is written to disk as soon as it is full, since it is never revisited
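The leaf-level optimization above works because the keys arrive already sorted. A hypothetical Python sketch (not the "cxfast" code itself) shows why: each leaf is filled left to right, flushed exactly once, and never touched again, so no b-tree descent is needed per key.

```python
def fast_leaf_build(sorted_keys, leaf_capacity):
    """Build index leaves from keys that arrive in sorted order.

    Illustrative sketch of the 'write leaf when full' optimization:
    fill the current leaf, flush it, and never revisit it.
    """
    leaves, current = [], []
    for key in sorted_keys:
        current.append(key)
        if len(current) == leaf_capacity:
            leaves.append(current)   # flushed to disk; never revisited
            current = []
    if current:                      # flush the final, partially full leaf
        leaves.append(current)
    return leaves
```

Random-order insertion would instead descend the b-tree for every key and rewrite leaves repeatedly; sorted-order bulk loading turns index build into sequential writes.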

53 "2085 Indexes were rebuilt. (11465) Index rebuild complete. 0 error(s) encountered."

54 Index Rebuild -- Tuning
- Truncate the index-only area if possible
Parameters (.srt file):
- -mergethreads 2 or 4, with -threadnum 2 or 1
- -datascanthreads: 1.5 * # of CPUs
- -B 1024, -TF 80 (monitor physical memory paging), -TMB 64, -TB 64, -TM 32
- -T: separate disk; RAM disk if not using -TF (no change)
- -rusage & -silent

55 Performance Numbers
Elapsed time: 12½ hours -> 2½ hours. A 5X improvement!
[Table: cost of each phase (in secs)]

56 Agenda
1. LRU (again)
2. Networking: Message Capacity
3. Networking: Resource Usage
4. Index Rebuild
5. Summary

57 Summary
LRU:
- Potential for a big win
- Always room for improvement -- us and you!
Networking:
- You now have more control; with power comes responsibility
Index Rebuild:
- Big improvements if your database is set up properly and you give index rebuild the system resources it needs
- Hopefully you'll never need it

58 Questions?


60 Performance -- 10.2B06 & -lruskips (250 users)
[Chart: latches/sec]

61 Performance -- 10.2B06 & -lruskips (250 users, big db)
[Chart: latches/sec]

62 Networking -- Promon Support (VSTs too!)
[Promon "Activity: Servers" output compared across four runs: no prefetch; prefetch with default settings; prefetch at 90% capacity; prefetch at 90% capacity with delay. Fields shown per run: messages received/sent, bytes received/sent, records received/sent, queries received, time slices.]

