An Exercise in Improving SAS Performance on Mainframe Processors


1 An Exercise in Improving SAS Performance on Mainframe Processors
SAS BLKSIZE and BUFSIZE Options

2 Foreword At the last KCASUG meeting, George Hurley presented “Customizing Your SAS Initialization II.” In that presentation, George suggested that it is possible to save CPU in SAS jobs by tuning the BUFSIZE parameter. With our current interest in saving CPU and stretching the life of mainframe equipment, I decided to investigate what kind of savings were possible in our environment.

3 Background In the 1990s and earlier, disk storage for mainframes consisted of a stack of 14” platters arranged in what was called a disk drive. There was a separate read/write head for each surface. All read/write heads were aligned at the same relative position and moved together. Disk drives were organized into tracks and cylinders. A track represented the data that could be accessed from one surface with one revolution of the disk. A cylinder was all the tracks that could be accessed from the same relative location of the read/write heads. Data was stored with gaps between records in a CKD (count key data) format.


6 Background 3390s were the final generation of IBM classical disk drives. Each track could hold up to 56,664 bytes. The largest record size was 32,767 bytes, but records were rarely larger than 27,998 bytes, the largest record size that allows 2 records per track. Record sizes approaching 27,998 bytes provided optimal use of disk storage on 3390 devices; this is commonly referred to as a “half track” record size.

7 Background When modern storage controllers started replacing classical mainframe storage, the storage controllers emulated classical storage devices, particularly the 3390. While data is actually stored in stripes with multiple layers of virtualization, access to the data still follows the protocol of classical mainframe storage.

8 Mainframe SAS Files Two factors have the most influence on the performance of I/Os for SAS data sets: BLKSIZE, the size of the block (physical record), and BUFSIZE, the size of the storage buffer, which should be a multiple of BLKSIZE.

9 SAS BLKSIZE Larger block sizes are more efficient: with smaller block sizes, there is additional overhead in SAS to manage each block. SAS files can have any BLKSIZE up to 32,760. The optimal BLKSIZE for SAS files is 27,648, the largest “half-track” size for SAS files, which provides the best balance of performance and disk storage utilization.
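As a sketch (exact option syntax can vary by SAS release on z/OS, so check your installation's documentation), the half-track block size can be set globally or when allocating an individual SAS data library; the libref and data set name below are hypothetical:

```sas
/* Global default block size for new SAS data libraries (z/OS) */
options blksize=27648;

/* Or per library, when a new SAS data library is allocated;
   MYLIB and the data set name are illustrative examples */
libname mylib 'PROD.SAS.DATA' blksize=27648;
```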

10 SAS BUFSIZE When SAS schedules an I/O for a SAS data set, it builds the I/O command to transfer as much data as will fit in the buffer as a single I/O command. This saves the operating system overhead related to managing multiple I/Os. SAS uses its own channel programming (EXCPs) for SAS files, not normal operating system access methods. For example, with a BLKSIZE of 27,648 and a BUFSIZE of 110,592, SAS would build I/O commands to transfer 4 blocks with each I/O command.

11 SAS BUFSIZE Buffer sizes between 110,592 and 221,184 bytes tend to be fairly efficient. MEMSIZE may need to be increased when BUFSIZE is increased.
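Both recommended buffer sizes are whole multiples of the 27,648-byte half-track block: 27,648 × 4 = 110,592 and 27,648 × 8 = 221,184. A minimal sketch of the corresponding options follows (the MEMSIZE setting shown is an assumption; tune it for your site):

```sas
/* BUFSIZE as a whole multiple of BLKSIZE:
   27,648 x 8 = 221,184 -> eight blocks moved per I/O command */
options bufsize=221184;

/* Larger buffers consume more memory; MEMSIZE=0 removes the
   SAS-imposed memory limit on z/OS (an assumption - sites may
   prefer an explicit cap instead) */
options memsize=0;
```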

12 Controlled Tests Performed some controlled tests. One controlled test: wrote 250,000 records to a SAS file (each about 1.6K of data); in a separate step, read the records back (in a _NULL_ data step, SET the input to the file just created); varied BLKSIZE and BUFSIZE in each run.
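The controlled test might be sketched as follows (the libref, data set, and variable names are made up, and the BLKSIZE= and BUFSIZE= data set options shown are one way to vary the values per run):

```sas
/* Step 1: write 250,000 records of roughly 1.6K each */
data test.big (blksize=27648 bufsize=110592);
  length filler $1600;
  filler = repeat('X', 1599);   /* ~1.6K of data per record */
  do i = 1 to 250000;
    output;
  end;
run;

/* Step 2: read the records back without producing output,
   so only the read I/O and CPU are measured */
data _null_;
  set test.big;
run;
```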

13 Controlled Tests Tests showed that a BLKSIZE of 27,648 performed better than a BLKSIZE of 6,144 for similar buffer sizes. (A BLKSIZE of 6,144 was the old standard in our shop.) Tests also suggested limited improvement in CPU and run times with buffer sizes above the 110,592–221,184 range; in fact, performance sometimes appeared to deteriorate with larger buffer sizes.


18 Production Pilots Identified the jobs that were using the largest total amount of CPU. Ran pilots on 2 of the top 5 jobs to explore potential benefits with real jobs: changed BLKSIZE from 6,144 to 27,648 and increased BUFSIZE to 221,184. Ran several parallel runs of the MXG job with various BLKSIZE and BUFSIZE values (MXG is a common SAS-based mainframe tool to capture and manage mainframe performance data); experimented with various block sizes; have not placed the changes to MXG in production yet. Rewrote one job in another language.

19 Pilot Results Pilot results were quite favorable. Job using the largest amount of CPU (runs many times each day) – see charts for Job 1: 6% reduction in CPU, 25% improvement in run time. Job using the 5th largest amount of CPU (runs many times each day) – see charts for Job 2: 9% reduction in CPU, 43% improvement in run time. MXG (2nd largest user of CPU – runs once daily): 5% reduction in CPU, ~10% improvement in run time.

20–23 (Chart slides: CPU and run-time comparisons for Job 1 and Job 2)

24 Production Implementation Changed BLKSIZE to 27,648 (changed both the CONFIG member and the SAS PROC). Changed BUFSIZE to 221,184 (changed the CONFIG member). Made changes to ensure jobs would not fail with memory issues: removed the MEMSIZE parameter from CONFIG (it defaults to 0, no limitation on memory); changed REGION to 0M in the SAS PROC; made a mass change to production SAS jobs to remove REGION parameter overrides.
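A sketch of what the CONFIG member changes might look like (the comment style, with an asterisk in column 1, is an assumption about the site's configuration-member conventions):

```sas
* CONFIG member entries: half-track blocks, larger buffers
BLKSIZE=27648
BUFSIZE=221184
* MEMSIZE removed - SAS then defaults to 0 (no memory limit)
```

On the JCL side, the EXEC statement of the SAS cataloged procedure would carry `REGION=0M` (step and procedure names vary by site).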

25 Implementation Results Measured results based on production jobs that ran daily, comparing results on a job / weekday basis. For jobs that ran during the day: 10% average reduction in CPU (varied from no gain to 15–20% improvement); 30% average improvement in run times (varied considerably from job to job). For jobs that ran at night: 3% reduction in CPU; 10% improvement in run times.

26 Issues and Opportunities Many production jobs reuse the same SAS files without ever deleting and recreating them, so the BLKSIZE remains the smaller size. Many production jobs use their own customized SAS PROCs or CONFIG members and cannot easily take advantage of the changes. Will need to look for opportunities to tune these jobs later.

27 Thinking Outside the Box One very large SAS job runs daily. The job would read millions of rows, sort the data on 4 keys, and summarize 32 columns using PROC UNIVARIATE. Rewrote the job in another language: took advantage of the partial natural order of the data and used a hashing algorithm to organize it; the initial-level summary was done in the summary program; the summarized data was then input to SAS.

28 Changes in Rewritten Job Reduced CPU by 95%. Improved run time by 97%. It is worth noting that I could find only two large SAS jobs that could take advantage of this technique; all the other SAS jobs that I looked at were far too complex to consider doing this.

