
1 Tape and ProtecTIER Update for IBM i folks
Nancy Roper, IBM Americas Advanced Technical Skills. Updated December 10, 2012. This presentation is stored on IBM Techdocs at the following url: Run it in slide show mode to see the animation, and note that there are a number of hidden charts.

2 Agenda
- Tape Product Line + LTO6
- Virtual Tape + ProtecTIER
- New BRMS Report for Tape Planning
- Gen 2 IOPless Tape Driver
- Tape Drive Sharing
- Tape Adapters
- Tape Info APARs
- Optimizing your Tape Performance
- BRMS Parallel Save PTF
- Tape Encryption
- Troubleshooting
- SAN Design Highlights
- Optical Replacement Options

3 Tape Product Line for IBM i

4 Current IBM Tape Product Line for IBM i
LTO Family (low cost, high capacity, fast streaming operations): TS2260 and TS2360 standalone drives; TS2900, TS3100, TS3200, TS3310 and TS3500 libraries.
Enterprise Family (high performance, industrial strength, fast streaming and start/stop operations): TS1140 drive; TS3400 and TS3500 libraries. Note: the TS3400 was withdrawn from marketing in Sept 2010.

5 LTO Ultrium 5 Tape Family
Minimum LTO5 / LTO6 support: IOPless attach only, V6R1M1 + POWER6.
Notes:
- Although SAS drives have 2 ports, they are only supported for single-system attach.
- For newer drives, the TS3500 may require ALMS and enhanced node cards; check with your IBM team.
- LTO4 onwards offers half-high fibre drives for the TS3100 and TS3200; fibre drives are 8 Gbit.
- Libraries have a max of 15,000 "elements" per library LPAR on IBM i.

| | TS2260 | TS2360 | TS2900 | TS3100 | TS3200 | TS3310 | TS3500 |
| Machine Name | 3580-H6S | 3580-S6X | 3572 | 3573-L2U | 3573-L4U | 3576 | 3584 |
| Max # Cartridges | 1 | 1 | 9 | 23+1 | 45+3 | 396 | >6200 |
| Partition Capable | No | No | No | Yes (w HH) | Yes | Yes | Yes |

The per-library LVD SCSI / SAS / fibre drive options (HH = half-high, FH = full-high) vary by model; note that LVD SCSI drives are not offered for LTO4 onwards (LTO3 onwards on the larger libraries). LME encryption requires the Transparent LTO Encryption feature: fc 5901 (TS2900, with SAS/fibre drives), fc 5900 (TS3100/TS3200, with fibre drives), fc 1640 (TS3310).

6 Enterprise Tape Family
Minimum TS1130 support: V5R3 with IOP'd fibre cards; V6R1 + POWER6 for IOPless fibre cards. Minimum TS1140 support: IOPless attach only, V6R1M1 + POWER6.
Notes:
- 3590 drives are not supported on POWER7 or higher.
- Drive-based encryption is supported for TS1120 drives onwards in the TS3400, TS3500 and 3494 libraries, but not in standalone drives.
- The TS1140 is new in Spring 2011. The TS3400 was withdrawn in Sept 2010 and does not offer the TS1140.

| | TS1140 Standalone | TS3400 | TS3500 |
| Machine Name | 3592-E07 | 3577-L5U | 3584 |
| Max # Drives | 1 | 2 | 192 |
| Max # Cartridges (max 5,000 per library LPAR) | n/a | 18 | >6200 |
| Partition Capable Library | n/a | Yes | Yes |
| LVD Drives | No | No | No |
| Fibre Drives | 8 Gbit | 4 Gbit (TS1120/30) | 4 Gbit (TS1120/30), 8 Gbit (TS1140) |
| Library Managed Encryption Capable | No (needs a library) | Yes | Yes |

7 LTO6 Tape Drives

8 LTO6 is Here!
- TS3500 LTO6 drives: announced Wed Oct 3, 2012; generally available Fri Nov 9, 2012.
- Other external LTO6 drives: announced Tues Nov 6, 2012; generally available Fri Dec 7, 2012.
- LTO6 media: announced Mon Dec 3, 2012; generally available Fri Dec 7, 2012.
- Integrated drives & storage enclosure drives: 2013.
- The capacity increase comes from increases in tape length, tracks, and linear density.
- Minimum library firmware levels: TS3100/TS3200 B.50; TS3310 G.GS003; TS3500 C070 / C080; check the minimum level for the TS2900 with your IBM team.
- Compaction: generically, LTO1-5 anticipate 2:1 compaction and LTO6-8 anticipate 2.5:1 compaction, due to a larger compression history buffer. IBM i typically gets 3:1 or more.
- LTO5/6 are supported on IBM i on POWER6 and V6R1M1 or higher, with IOPless adapter cards. SSIC is up to date for GA'd drives.
- For encryption, TKLM (vs EKM) is needed for LTO5 onwards.

9 LTO Media Details
Native capacity and native drive speed by generation:

| Generation | Native Capacity | Native Drive Speed |
| LTO1 | 100 GB | 15 MB/sec |
| LTO2 | 200 GB | 35 MB/sec |
| LTO3 | 400 GB | 80 MB/sec full-high, 60 MB/sec half-high |
| LTO4 | 800 GB | 120 MB/sec |
| LTO5 | 1.5 TB | 140 MB/sec |
| LTO6 | 2.5 TB | 160 MB/sec |

Cartridge compatibility: an LTO drive reads cartridges from its own generation and the two prior generations, and writes its own generation and the one prior generation; other combinations are not compatible.
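To make the capacity planning concrete, here is a minimal sketch of the effective-capacity arithmetic, using the native capacities from the table above and the planning ratios quoted in this deck (2:1 for LTO1-5, 2.5:1 for LTO6, and the ~3:1 often seen on IBM i). The figures are planning estimates, not guarantees:

```python
# Effective LTO cartridge capacity under different compaction ratios.
# Native capacities (TB) come from the generation table above; the ratios
# are the planning figures quoted in this deck.
NATIVE_TB = {"LTO1": 0.1, "LTO2": 0.2, "LTO3": 0.4,
             "LTO4": 0.8, "LTO5": 1.5, "LTO6": 2.5}

def effective_tb(gen: str, ratio: float) -> float:
    """Capacity to plan on for one cartridge at a given compaction ratio."""
    return NATIVE_TB[gen] * ratio

for gen in ("LTO5", "LTO6"):
    default = 2.5 if gen == "LTO6" else 2.0   # generic planning ratio
    print(f"{gen}: native {NATIVE_TB[gen]} TB, "
          f"plan {effective_tb(gen, default):.2f} TB at {default}:1, "
          f"~{effective_tb(gen, 3.0):.2f} TB at 3:1 (typical IBM i)")
```

For example, an LTO6 cartridge plans out at 6.25 TB at the generic 2.5:1 ratio, and around 7.5 TB at the 3:1 ratio IBM i data often achieves.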

10 LTO6 Half High Performance on IBM i
Save and restore comparisons for fibre-attached LTO4, LTO5, TS1140 and LTO6.
- LTO6 save: user mix 86 MB/sec; large file 500 MB/sec.
- LTO6 restore: user mix 80 MB/sec; large file 500 MB/sec.
Notice that LTO6 is almost as fast as TS1140. POWER7 and 8 Gbit fibre are needed to reach the large-file speeds.

11 LTO6 Performance on IBM i
Speeds shown are in GB/hr. POWER7 and 8 Gbit fibre are needed to reach the large-file speeds shown. Configurations:
- LTO4: 9117-MMA 16-way with 200 DASD, 4 Gbit IOA
- LTO5: 9179-MHC with 720 DASD on 574E IOAs, EXP12 3 Gbit SAS drawers, 577D 8 Gbit IOA
- TS1140 (3592-E07): 9179-MHC with 720 DASD on 574E IOAs, EXP12 3 Gbit SAS drawers, 577D 8 Gbit IOA
- LTO6: 9179-MHD with 288 DASD on 57B5 IOAs and SAS drawers, 577D 8 Gbit IOA

| Save | LTO4 | LTO5 | TS1140 | LTO6 |
| 1 GB Source File | 32 | 25 | 26 | 28 |
| 1 Directory, Many Objects | 55 | 46.5 | 77 | 110 |
| Many Directories, Many Objects | 40 | 44.5 | 56 | 67 |
| 12 GB User Mix | 234 | 220 | 241 | 312 |
| Domino (offline) | 575 | 605 | 937 | 740 |
| 64 GB Large File | 859 | 1366 | 1814 | 1720 |
| 320 GB Large File | 890 | 1475 | 1861 | 1810 |

Restore (GB/hr, same four columns; where fewer than four values appear, repeated values were collapsed in the transcript):
- 1 GB Source File: 50 / 48 / 41 / 59
- 1 Directory, Many Objects: 78.5 / 78 / 112
- Many Directories, Many Objects: 33.5 / 29 / 42
- 12 GB User Mix: 210 / 225 / 227 / 286
- Domino (offline): 650 / 803 / 1273 / 1038
- 64 GB Large File: 837 / 1307.5 / 1873 / 1760
- 320 GB Large File: 1327 / 1895

12 Virtual Tape

13 Virtual Tape Alternatives
IBM i Integrated Virtual Tape:
- V5R4 onwards; part of the operating system
- Good performance with enough disk arms
- No turnkey remote replication
External Virtual Tape (ProtecTIER TS7620, TS7650):
- V5R4 onwards
- Good performance with appropriate disk
- Strong remote replication

14 What does ProtecTIER do?
Diagram: an IBM i in New York saves locally to ProtecTIER virtual tapes, with de-duplication on disk. ProtecTIER replicates over IP to a second ProtecTIER (and IBM i) in Ohio, with minimized bandwidth since data is de-dup'd before sending. Optional duplication to physical tape (eg a TS3500) at the local or remote site.

15 IBM TS7600 ProtecTIER® Deduplication Family – Capacity and Performance
TS7650 Gateway (highest performance, largest capacity, high availability, flexible storage):
- Active-active cluster: up to 2500 MB/sec save, up to 3200 MB/sec restore, 1 PB useable
- Single node: up to 1600 MB/sec save, up to 2000 MB/sec restore, 1 PB useable
TS7620 SMB Appliance (high performance and capacity, down to entry capacity at very low cost):
- Single node: up to 150 MB/sec, 11.8 TB (11 TiB) useable
- Single node (entry): up to 150 MB/sec, 5.9 TB (5.5 TiB) useable
Nominal space available = "useable" space × HyperFactor ratio.
1 TB = decimal TB = 1,000,000,000,000 bytes or 1,000 GB (ie 10^12 bytes); 1 TiB = binary TB = 1,099,511,627,776 bytes or 1,024 GiB (ie 2^40 bytes).
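A quick worked check of the nominal-space formula above. The 10:1 HyperFactor ratio is an assumed example value only; real deduplication ratios depend on your data and its rate of change:

```python
# Nominal space = "useable" space x HyperFactor ratio (formula above).
def nominal_tb(useable_tb: float, hyperfactor: float) -> float:
    return useable_tb * hyperfactor

for useable in (5.9, 11.8):   # TS7620 useable capacities, decimal TB
    print(f"{useable} TB useable x 10:1 -> "
          f"{nominal_tb(useable, 10):.0f} TB nominal")
```

So the 11.8 TB appliance holds roughly 118 TB of nominal (pre-dedup) backup data at an assumed 10:1 ratio, and proportionally less at lower ratios.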

16 Where does it Fit – Generally and on IBM i
- Small servers can't optimize a tape drive (writing, then waiting, waiting, waiting) → virtual tape can provide multiple virtual drives → nice with VIOS for IBM i.
- Small backups don't fill a tape → virtual tape can make virtual volumes of any size → less important for IBM i.
- Tapes are hard to manage → virtual tape keeps all the volumes inside the device → good for IBM i.
- Offsite shipments are costly and a bother → virtual tape can transmit the volumes to a remote site → very interesting for IBM i customers.

17 Virtual Tape on IBM i – Important Points
- Overall speed vs single-stream speed: virtual tape devices shine when they can run a large number of medium-speed backup streams, but IBM i customers sometimes need a small number of very fast streams. Be sure to understand the single-stream performance provided, to make sure your virtual tape device will meet your needs.
- Single-stream performance depends on the VTL disk type/amount. Current-technology physical drives run at the user-mix / large-file per-stream rates shown in the benchmark pages later in this deck; ProtecTIER Gateway full-box save capacity is 2500 MB/sec with 2 DD5 nodes.
- Backup scheduling: draw a backup Gantt chart to check the MB/sec and number of streams at your peak (eg four IBM i LPARs saving between 11 pm and 1 am; see the sketch below).
- Non-infinite resources: although virtual tape is flexible, remember the resources aren't infinite.
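Here is a minimal sketch of the Backup-Gantt-Chart check described above: given each LPAR's save window and stream speed, find the peak aggregate MB/sec the virtual tape device must sustain. The LPAR names, times and per-stream speeds are illustrative, not measurements:

```python
# Sweep-line over save windows: +rate at each start, -rate at each end,
# track the running total, and report the highest concurrent load.
from datetime import datetime

def peak_load(saves):
    """saves: list of (start, end, mb_per_sec). Returns (peak MB/sec, when)."""
    events = []
    for start, end, rate in saves:
        events.append((start, -0.0 + rate))   # save begins
        events.append((end, -rate))           # save ends
    load, peak, peak_at = 0.0, 0.0, None
    for when, delta in sorted(events):        # ends sort before starts at ties
        load += delta
        if load > peak:
            peak, peak_at = load, when
    return peak, peak_at

t = lambda day, hhmm: datetime(2012, 12, day, *map(int, hhmm.split(":")))
schedule = [
    (t(10, "23:00"), t(11, "00:30"), 20),   # IBM i 01: user mix, one stream
    (t(10, "23:30"), t(11, "01:00"), 80),   # IBM i 02: large file
    (t(10, "23:30"), t(11, "00:30"), 60),   # IBM i 03
    (t(11, "00:00"), t(11, "01:00"), 60),   # IBM i 04
]
print(peak_load(schedule))   # compare the peak against the device's rating
```

With this illustrative schedule the peak is 220 MB/sec just after midnight; that figure, not the daily average, is what the VTL has to sustain.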

18 ProtecTIER Single Stream Performance on IBM i
- SM2 Appliance: per-stream rates vary with the number of concurrent streams (1 to 18); with 18 streams, plan on up to 52 MB/sec per stream.
- AP1 Appliance: this device is withdrawn now. From an extensive IBM i customer POC in July 2011: DD4 up to 60 MB/sec per stream with 3-5 streams at once.
- DD4 Gateway with V7000 or DS8000: plan for up to 110 MB/sec per stream with 1-2 streams at once (from real-life customers).
- DD5 Gateway: customers are exceeding 200 MB/sec per stream on IBM i.
Recall that user-mix data will max out at 60 MB/sec regardless of the single-stream speed shown; large-file data is needed for the above speeds. These figures are estimates.

19 ProtecTIER on IBM i – Support and Testing
Supported with:
- IOP'd fibre cards from V5R4 onwards (2765, 5704, 5761)
- IOPless fibre cards from IBM i 6.1 onwards
BRMS is strongly recommended. ProtecTIER is tested with the same COMPREHENSIVE test buckets used for regular tape drives; IBM ProtecTIER is the ONLY external virtual tape product that is tested and supported by IBM Rochester.

20 Helpful Websites for IBM i and ProtecTIER:
List of ProtecTIER and tape resources for IBM i customers (separate links for Partners and IBMers), the IBM i Tape & ProtecTIER Wiki, and the BRMS Wiki.

21 Helpful Websites for IBM i and ProtecTIER:
List of ProtecTIER and tape resources for IBM i customers (separate links for Partners and IBMers):
- IBM i Tape & ProtecTIER Wiki: ProtecTIER releases tested on IBM i; native and VIOS/NPIV tape attachment support information; save/restore/tape-related PTF information (eg the Large Library PTF); Group PTF information; etc.
- BRMS Wiki: BRMS release enhancements; BRMS "Enhancement PTFs"; the new BRMS "Enterprise" function; BRMS course dates; BRMS Group PTF numbers and dates; Save/Restore Group PTF numbers and dates; troubleshooting docs (DMPBRM, QTADMPDV, etc).

22 IBM i / ProtecTIER Enhancement PTFs
- DUPMEDBRM Compaction PTF (June 2010)
- Remote Dups – moving tapes marked for dup (2011)
- BRMS Parallel Save Performance (July 2011)
- ProtecTIER Initialize on Expiry (June 2012)
- PRTRPTBRM Report (June 2012)
- 15,000 Slot Library PTF (July 2012)
- 256 Drives in a Library (IOPless) (Fall 2012)
See the Appendix for details and PTF numbers.

23 ProtecTIER on IBM i – Designing / Sizing
1. Get the ProtecTIER on IBM i Data Collection Spreadsheet (simple environment: 1-2 hours of work; complex environment: several days of work).
2. Use PRTRPTBRM *CTLGRPSTAT to gather data for the spreadsheet.
3. Build a repository sizing spreadsheet, for example:

| LPAR | GB in Save | Iterations Kept | GB in Repository |
| IBM i 01 | 200 | 3 | 600 |
| IBM i 02 | 350 | 7 | 2450 |
| IBM i 03 | 100 | 3 | 300 |
| IBM i 04 | 575 | 12 | 6900 |
| Total | | | 10250 |

4. Build a backup-schedule Gantt chart to figure out the peak MB/sec across LPARs (see the sketch after this list).
5. Then ask the ProtecTIER FTSS to tell you which model of ProtecTIER you need.
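A sketch of the repository-sizing arithmetic on this slide: nominal repository data is (GB per save × iterations kept) summed over the LPARs, and the physical space needed is the nominal figure divided by the HyperFactor ratio. The 10:1 factor is an assumed example; the ProtecTIER FTSS derives the real ratio from your data:

```python
# Repository sizing from the table above.
lpars = {                 # LPAR: (GB per save, iterations kept)
    "IBM i 01": (200, 3),
    "IBM i 02": (350, 7),
    "IBM i 03": (100, 3),
    "IBM i 04": (575, 12),
}

nominal_gb = sum(gb * kept for gb, kept in lpars.values())
hyperfactor = 10.0        # assumed; real ratios vary with the data
physical_gb = nominal_gb / hyperfactor

print(f"Nominal repository: {nominal_gb} GB")                 # 10250 GB
print(f"Useable space needed at {hyperfactor:.0f}:1 dedup: "
      f"{physical_gb:.0f} GB")
```

Compare the useable-space figure against the capacities on the TS7600 family slide, alongside the peak-MB/sec figure from the Gantt chart, to pick a model.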

24 PRTRPTBRM *ctlgrpstat
The new BRMS command: PRTRPTBRM *CTLGRPSTAT.

25 Brand New! Tape / ProtecTIER Sizing Report from BRMS
Excellent for:
- Monitoring / analyzing ongoing backup performance
- Sizing new tape / ProtecTIER environments
Notes:
- Available via the June 2012 BRMS quarterly PTF: V5R4 SI46335 (partial support – see note), V6R1 SI46339, IBM i 7.1 SI46340.
- Once the PTF is loaded, BRMS will start tracking saves: try to apply the PTF several weeks before you need your first reports.
- Backups that run via control group (STRBKUBRM) will have full information. Backups that run via SAVxxxBRM will be bundled in the line labelled *NONE; adjust the report times to try to isolate each save.
- Note: reports for a V5R4 system must be created on a V6R1 or IBM i 7.1 system that is in the same BRMS network; use the "From System" parameter.
- Use alongside the IBM i ProtecTIER Data Gathering Spreadsheet.
- Command details are on the next page.

26 ProtecTIER / Tape Sizing Report from BRMS – The Command
- The first time you run the report, change the first parameter to *CTLGRPSTAT and take the defaults for everything else.
- If there is a *NONE line at the end of the report, experiment with the start/end times to isolate the individual non-control-group tape activity.
- If you just want to see one save, type the control group name.
- If you have a V5R4 system, run the command from another system in the BRMS network using the "From System" parameter.

27 Tape / ProtecTIER Sizing Report from BRMS – Using It
The report shows the system name and flags failed saves. For a sizing, figure out the typical start time, duration, save size and speed of each save type.
Tape operations that are not control-group saves (eg SAVxxxBRM, SAVxxx to a BRMS-enrolled tape, or dups of saves that were not done via a control group) are all bundled into the first *NONE line for each date, with their volsers on subsequent lines. Use your knowledge of the system to re-run the report with various start/end times to try to isolate the data for each operation.

28 Gen 2 IOPless Tape Driver

29 Gen 2 IOPless Tape Driver
New and improved, for customers using IOPless SAS and fibre drives. All customers are encouraged to get it – it is more robust than the original IOPless tape driver.
How to get it on IBM i 6.1.1: PTF MF50093 or its supersedes. Read II14355, II14526, II14584 and II14615 for related PTFs etc.
How to get it on IBM i 7.1: included with the base OS.
Read the Info Center for configuration changes needed at install:
- Library and drive resource names may change.
- Control path rules have changed – a 2-port fibre card now needs to be able to see a control-path drive on each port.
- Control path failover is available with IBM i 7.1 and current PTFs.
- Disparate drives can share a library *and* a fibre card at IBM i 7.1 (this does NOT apply to SAS drives).
Note: if you have a lot of drives of one type (eg LTO4) in a library such that you need more than one fibre card to attach them all, then continue to choose either IOP'd or IOPless fibre cards for them, but not a mixture. This is related to "drive pooling". For details, see the "SAN Design for IBM i" presentation on Techdocs (Google on PRS2997).

30 Drive Sharing

31 Tape Drive Sharing on IBM i
Sharing via SAN: multiple IBM i LPARs attach through the SAN to the drives (eg two LTO4 drives, TAP02 and TAP03) in a shared tape library (TAPMLB01).
Sharing via LPAR: the tape adapter card is moved between IBM i LPARs as needed. The HMC or the LPAR Toolkit can help automate card movements and minimize the risk of moving the wrong card.

32 Tape Virtualized via VIOS

33 Tape Virtualized Via VIOS (Great for Blades!)
VIOS-owned SAS tape devices:
- The VIOS-attached tape device is virtualized directly to the LPARs.
- Use the IVM / HMC GUI to assign the SAS card / drive to the LPARs as needed (manual).
- The resulting save can be restored on any LTO4 drive, not just a VIOS-attached one.
- A big improvement for blades, both for backup and migration.
- For a list of supported drives, Google on II14584 (small standalone drives).
VIOS NPIV for fibre libraries (requires an NPIV-capable SAN switch):
- NPIV = N-Port ID Virtualization: virtualizes the tape fibre port so it can be shared concurrently by all attached LPARs.
- Supported on fc 5735, 5276 (low profile), 5729 (4-port, NPIV only) and the blade equivalent; supported on selected blades.
- Useful for environments with a lot of small LPARs that don't justify a dedicated fibre card.
- For a list of supported libraries, Google on II14526 (fibre libraries).
See the Tape Wiki for details on required code levels.

34 IBM i Hosting IBM i Sharing Tape Drives

35 IBM i hosting IBM i – Client Virtualized Tape Devices
Host "server" LPAR: minimum IBM i 7.1 TR2. Guest "client" LPARs: minimum IBM i 6.1.1.
When IBM i is hosted by IBM i:
- Previously, the tape drive had to be "moved" among the LPARs via LPAR sharing.
- With the releases shown above, guest partitions can all "see" the tape drive and share it, similar to SAN-attached libraries.
- Small sequential-mode drives only (see the detailed list of drives on the next page).
- If you need library function, you need VIOS / NPIV instead.

36 Tape Adapter Cards

37 IOP and IOPless explained
HBA = Host Bus Adapter; IOP = Input/Output Processor; IOA = Input/Output Adapter.
- Other platforms: disk, tape and comms attach via HBAs.
- IBM i originally ("IOP'd"): twinax, Ethernet and tape IOAs sat behind IOPs.
- POWER Systems merger with AIX: phase 1 shared AIX and IBM i features; phase 2 unified the features.
- IBM i after the merger ("IOPless"): twinax, Ethernet and tape IOAs attach directly, with no IOP.

38 Tape Adapter Cards (IOAs)
Try to pick IOPless cards, since they will go forward to POWER7.
LVD SCSI cards – IOP'd: fc 5702 / 5712, fc 5736 / 5806; IOPless: fc 5775 / 5736:
- LVD SCSI libraries are end-of-life; they do not attach to POWER7.
- 140 MB/sec per port (max 250 MB/sec per drive).
Fibre cards with IOPs (1 port each): fc 2765 – 100 MB/sec; fc 5704 – 200 MB/sec; fc 5761 – 400 MB/sec:
- Not supported on POWER7 onwards (the oldest cards are not supported on POWER6 onwards).
- Not bootable; use alt-install.
IOPless SAS cards (2 ports each): fc 5912 (PCI-X), fc 5901 (PCI-e), fc 5278 (PCI-e, low-profile 5901):
- POWER6 + IBM i 6.1 onwards, except the TS2240 on fc 5912 can use V5R4M5.
- 320 MB/sec per port. Bootable.
IOPless fibre cards (2 ports each): fc 5749 (PCI-X, 400 MB/sec), fc 5774 (PCI-e, 400 MB/sec), fc 5735 (PCI-e, 800 MB/sec), fc 5273 (PCI-e, low-profile 5774), fc 5276 (PCI-e, low-profile 5735), fc 5708 (PCI-e, FCoE), fc 5729 (PCI-e, 4 × 800 MB/sec, NPIV only):
- POWER6 + IBM i 6.1 onwards for IOPless tape; fc 5735, 5276 and 5729 support NPIV for sharing. Bootable.
Note: for disk, fc 5749 IOPless fibre cards can now attach to POWER5/5+ with V6R1, but tape still needs POWER6.

39 Tape Attachment Information

40 SSIC + Interim IBM i Tape Support Matrix
SSIC is the official tool to look up supported combinations of server, adapter, switch, disk, tape and firmware. IBM i is included in SSIC for POWER5, V5R4M0 and TSxxxx drives onwards. As a check, also use the Interop Spreadsheet (next page). SSIC URL:

41 IBM i Interim Tape Interop Spreadsheet – TSxxxx onwards
This sheet was the input to the System Storage Interop Center (SSIC) tool for IBM i. The POWER7 + IBM i 7.1 version is now ready!

42 IBM i Tape Support Matrix – Server + IOA Definitions
Maps server models to the column titles in the Interop Spreadsheet. Explains LVD SCSI feature code numbers, including the fc 5736 collision with System p.

43 IBM i Tape Support Matrix – Bonus LTO3/4 Guide

44 Tape Drive Model Characteristics
To find this document, Google for it; it's in the IBM i SupportLine Knowledgebase.

45 Tape Related Info APARs

46 Information APARs for Tape
Find them via Google or the Tape Wiki:
- II14355 – tape drives supported on IOPless adapter cards
- II14615 – tape drives supported for IBM i hosting IBM i
- II14584 – tape drives supported for VIOS / SAS connection
- II14526 – tape drives supported on VIOS / NPIV fibre connections

47 Shortening your Backup Window

48 Overview: Shortening your Backup Window
- Optimize hardware: ensure the current backup isn't bottlenecked (CPU, buses, disk, tape); invest in faster hardware; use concurrent / parallel saves; use virtual tape (integrated virtual tape or TS76xx).
- Save less data: SAVCHGOBJ and selective saves.
- Restructure saves while users are online: use save-while-active; use Domino online saves.
- Use a second system for saves: external disk FlashCopy; run the backup on the HA system.
Guiding principles: keep it simple; manage with BRMS.

49 Optimizing your Hardware

50 High End Tape Performance Benchmarks
Workloads benchmarked: source file, IFS 1:m, user mix, IFS m:m, large file, Domino offline, Linux NWS offline, across the LTO and 359x families. See Chapter 15 of the Performance Capabilities Reference manual for benchmark details.
Highlights:
- User-mix speed is the same from LTO3 onwards; user-mix and large-file speeds are the same on LTO3/LTO4.
- LTO5 large file increases to 409 MB/sec on POWER7 / 8 Gbit.
- TS1140 large file: 525 MB/sec (1890 GB/hr!) on POWER7 with fc 5735 fibre cards.
- LTO4 LVD SCSI tops out at 140 MB/sec (500 GB/hr), ie LTO3 speeds.
Note: the first savefile & virtual tape benchmarks used 924 arms in the virtual tape ASP; smaller environments should review the arm-based benchmarks on the next page.

51 Compare the Speed to the Benchmarks for your Current Drive
Current benchmarks are in the IBM i Performance Capabilities Reference, in the Save/Restore chapter. Older benchmarks are summarized in Nancy's Tape Performance Chart:

52 How to estimate the performance of a new drive
How do you figure out how fast your current drive is running?
Method #1: time your full save.
- Use WRKSYSSTS to find out how much data there is on your system.
- Use the joblog or BRMS log to find out how long your full save ran.
- Divide to get GB/hr; divide by 3.6 to get MB/sec.
- If it's slower than you expect, investigate whether you might have a bottleneck somewhere before continuing (see the next page).
Method #2: use the new BRMS Backup Statistics Report (control group, start/end, duration, total GB, MB/sec).
To estimate the performance of a new drive: figure out how fast the current drive is running (eg an LTO2 at 40 MB/sec user mix), match it to the benchmarks for the current drive, and read off the performance of the new drive for that benchmark (eg LTO4 – 65 MB/sec). A sketch of this arithmetic follows below.

| Tape Drive | User Mix | Large File |
| LTO2 | 43 MB/sec | 100 MB/sec |
| LTO3 | 60 MB/sec | 140 MB/sec |
| LTO4 | 65 MB/sec | 247 MB/sec |
| 3592J | 51 MB/sec | 104 MB/sec |
| TS1120 | | 250 MB/sec |
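A minimal sketch of Method #1 plus the benchmark scaling just described: measure the current drive's real-world speed, see what fraction of its published benchmark it achieves, then apply that fraction to the new drive's benchmark. The 1 TB / 7-hour save is an illustrative figure, not a measurement:

```python
# Estimate a new drive's speed from the current drive's measured speed.
BENCH_MB_SEC = {"LTO2": 43, "LTO3": 60, "LTO4": 65}   # user-mix, from the table

def measured_mb_sec(total_gb: float, hours: float) -> float:
    """GB/hr divided by 3.6 gives MB/sec (1 GB/hr = 1000 MB / 3600 s)."""
    return (total_gb / hours) / 3.6

current = measured_mb_sec(total_gb=1000, hours=7)   # eg 1 TB full save in 7 h
fraction = current / BENCH_MB_SEC["LTO2"]           # how close to benchmark?
estimate = fraction * BENCH_MB_SEC["LTO4"]          # scale to the new drive
print(f"current {current:.0f} MB/sec -> estimated LTO4 user mix "
      f"{estimate:.0f} MB/sec")
```

In this example the LTO2 runs at about 40 MB/sec (92% of its 43 MB/sec benchmark), so a like-for-like LTO4 estimate is about 60 MB/sec on the same workload.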

53 Parallel and Concurrent Saves

54 Multi-streamed Saves
Parallel saves: IBM i carves the backup into multiple streams; 1 job for all save streams together; overhead is approx 1 drive in 8; BRMS is strongly recommended; beware of recovery considerations.
Concurrent saves: the user splits the backup into multiple streams; 1 job per save stream; least overhead.
Check out the June 2006 issue of the COMMON Connect magazine for an article about parallel save and restore. Get the June 2011 BRMS PTF for a possible BIG performance improvement, for all releases from V5R4 to 7.1 – see the next page for details.

55 Parallel Save Performance Increase – June 2011 BRMS PTF
From inception, BRMS had inadvertently used small blocks rather than large blocks for parallel saves, both parallel-parallel and parallel-serial. The June 2011 BRMS PTF includes a fix to change these saves to use large blocks: V5R4 SI42923, V6R1 SI42924, IBM i 7.1 SI42925. This can make a SIGNIFICANT difference to performance. Customers who have tried parallel saves in the past and concluded they were not helpful should go back and retry them once the PTF is applied. Parallel saves done outside BRMS were already using large blocks, and hence were already receiving the improved performance.

56 Backup Encryption Alternatives
- Drive-based encryption with TKLM
- BRMS software-based encryption

57 Comparison: Tape Drive vs BRMS SW Based Encryption
Tape drive hardware-based encryption (with TKLM):
- V5R3 onwards (or the minimum release required for tape drive attach).
- Considerations: needs fibre or SAS LTO4/5, or fibre TS1120/TS1130/TS1140, in a library; encrypts whole cartridges.
- Advantages: no impact on CPU utilization; max 1% performance degradation; no increase in media required; all objects can be encrypted.
BRMS software-based encryption (BRMS Advanced feature; uses IBM i Encrypted Backup Enablement keys):
- V6R1 onwards; IBM i Encrypted Backup Enablement – 57xx-SS1 option 44 – is also required.
- Works with any tape drive or library; a BRMS control group can mix encrypted and unencrypted libraries (eg LibA encrypted, LibB unencrypted).
- Advantages: any type of tape drive; mix/match encryption on one cartridge.
- Considerations: significant increase in CPU utilization; significant performance degradation; may take up to 3× as much media; certain system libraries can't be encrypted.

58 Backup Encryption: Drive-based Encryption with TKLM

59 Encryption Methods
- Application-Managed (AME): TSM only.
- System-Managed (SME): z/OS, AIX, Solaris, Windows & Linux.
- Library-Managed (LME): TS3500, TS3400, TS3310, TS3200, TS3100, TS2900, 3494 – the method used by IBM i.
Note: Brocade encrypting switches are not supported for IBM i saves.
Speaker notes: Policy determines who decides what gets encrypted and what does not, and, for data that gets encrypted, who determines which public key (key label) is used. Key management determines who generates the data keys and who manages and controls the key-encrypting keys. AME/SME/LME is specified at the tape drive; this drive attribute can be updated via the 3584 web interface in 3584 environments, and by the CE at the tape drive for 3494, Silo and standalone environments. Conceptually, three different methods of encryption are supported. With application-managed encryption, the application layer that initiates data transfer for tape storage (eg TSM) generates and provides keys to the TS1120 tape drive, and is responsible for the creation, storage and management of the cartridge data keys. The system layer is everything between the application and the tape drives (the OS, DFSMS, device drivers, and FICON/ESCON controllers); a key manager software program works in conjunction with the system layer to provide keys based on the policies established for encryption of tape data. In open systems environments, library-managed encryption uses the library enclosure's out-of-band interface to each tape drive to transmit the data encryption request and keys; policies may be implemented based on cartridge volume serial numbers, logical libraries, or drives. This nomenclature – application-managed, system-managed and library-managed – is used across IBM development teams to aid communication of customer requirements.

60 IBM i Tape Encryption on IBM Tape Drives
How does it work?
1. IBM i sends the backup to the tape library.
2. If the drive / library has encryption turned on, the library gets the keys from the TKLM server.
3. The drive / library writes the save encrypted.
Components:
- Encryption-capable tape drive(s): fibre TS1120/TS1130/TS1140 or fibre/SAS LTO4/LTO5/LTO6.
- A tape library: TS2900/3100/3200/3310, TS3400, TS3500, 3494.
- Multiple key managers (TKLMs).
- A suitable drive / library / TKLM at the DR site to restore.
Notes: BRMS is recommended to keep encrypted / non-encrypted tapes separate. The library can be used in sequential mode if desired – encryption will still work.

61 Comparison of Solution Components for LTO4/5 vs TS1120/30
| | LTO4 / LTO5 / LTO6 | TS1120 / TS1130 / TS1140 |
| Encryption-capable drive | Fibre or SAS LTO4/5/6 drives only (*NOT* LVD SCSI drives) | Fibre TS1120/30/40 (3592E) drives with fc 5592 ($5K) or fc 9592 (n/c) |
| Tape library | TS2900, TS3100, TS3200, TS3310, TS3500 | TS3400, TS3500 or 3494 |
| Transparent LTO Encryption feature (for LME and SME) | TS2900: fc 5901 ($1,250 US); TS3100/TS3200: fc 5900 ($2,500 US); TS3310: fc ($5,000 US); TS3500: fc ($12,000 US) | Not required (function is included in the drive price) |
| Media | LTO4/5/6 media only | TS1120/30/40 media |
| Key manager | Multiple TKLMs (SW + HW to run it on) | Multiple TKLMs (SW + HW to run it on) |

Note: TS1120/30/40 use a special media density for encrypted tapes called FMT3592A2E/A3E/A4E; LTO does not have a special density.

62 Tivoli Key Lifecycle Manager (TKLM)
What is TKLM?
- The follow-on to Encryption Key Manager (EKM).
- Stores / serves keys for encryption – tape: TS1120/30/40, LTO4/5/6; disk: DS8000.
- MUCH more user-friendly than EKM.
- Although we can't RUN TKLM on IBM i, we can use TKLM on another platform to encrypt our IBM i saves.
What platforms does it run on?
- Windows Server 2003 & 2008
- AIX 5.3, AIX 6.1 or later
- Red Hat Enterprise Linux 4 & 5
- SuSE Linux Enterprise Server 9 & 10
- Solaris 9 & 10 SPARC
- z/OS Version 1.9, 1.10, 1.11
IBM i customers usually run their TKLM on Windows because:
- They typically have good skill on Windows.
- It avoids the temptation to run TKLM on a system with a production application and accidentally encrypt the keys (this would make it impossible to recover, due to the chicken-and-egg problem).
- It is easy to load up a spare TKLM and store it offsite, and easy to acquire hardware to re-build the TKLM after a big disaster.
- It is faster to restore / rebuild the key store on Windows than on a larger platform.

63 TKLM: Pricing and Licensing
A TKLM server license includes 1 production copy of TKLM and multiple non-production copies; it no longer includes the first 2 tape drive or disk resource activations. A TKLM tape drive license (no longer called an "RVU") is authorization to add 1 more tape drive to the drive table.
Example (primary site with 6 drives, secondary site with 4 drives): a single TKLM server license with 10 tape drive licenses could be used as follows, simultaneously:
- Load it onto TKLM A (primary site) and have both tape libraries point at it as their main key manager, with 10 drives in the drive table.
- Load it onto TKLM B and have both libraries point at it as their backup key manager; TKLM B will be used automatically if TKLM A is unavailable.
- Load it onto TKLM C and TKLM D (secondary site) to use in case of a disaster; the libraries will have to be switched to point at these key managers when needed.
- Load it onto 2 laptops to store offsite in case of a serious disaster.
- Use TKLM C and TKLM D 2-3 times a year, for 2-3 days each time, for disaster recovery testing, even while TKLM A and TKLM B are serving keys.
If the secondary site is a cold site (eg its drives are only used in a disaster), then 6 drive licenses are enough. TKLM no longer offers volume discounts; check the announcement letter for details. If the customer would like to run each tape library from a local TKLM, then he will need 2 TKLM server licenses and 4 or 6 drive licenses respectively.

64 Tape Drive Based Encryption
Things to remember for IBM i:
- Library Managed Encryption (LME) only.
- Fibre or SAS drives only, not LVD SCSI – ie choose fibre/SAS LTO4/LTO5/LTO6 or fibre TS1120/30/40.
- Drives must be in a tape library.
- LTO4/5/6 or TS1120/30/40 media.
- BRMS is helpful for tracking encrypted / non-encrypted tapes.
- Include implementation services: IBM Rochester Lab Services – contact Mark Even.
Note: Brocade encrypting switches are not supported for IBM i saves.

65 Support and Troubleshooting IBM i Tape and ProtecTIER

66 Save / Restore Group PTF
Order the special PTF # shown for your release, and you will get a group of fixes related to save/restore. For more information, see the url at the bottom of this page.

67 BRMS Quarterly PTF
BRMS combines all the fixes each quarter into one giant PTF that can be tested as a whole. The list shows the date and PTF # of the latest fix. For more information, see the url at the bottom of this page.

68 BRMS Enhancements via Data Area
BRMS sometimes delivers enhancements between releases, or customer-specific function, controlled by setting a data area. These enhancements are listed on the BRMS website; some of them are related to ProtecTIER. Choose the various topic areas at the top of the page and scroll through the functions. For more information, see the url at the bottom of this page.

69 Troubleshooting - Flight Recorders
Tape flight recorders:
- CALL QTADMPDV <device name>, immediately after a drive problem.
- Gathers joblogs, PTF listings, hardware listings, VLOGs, SRC codes and service dumps; automatically creates a problem entry.
BRMS flight recorders:
- Collected automatically in the tmp/BRMS directory.
- To submit, use Operations Navigator to move the files from the IBM i IFS to your desktop, then email them to support.
Save/restore flight recorders:
- CALL QSRSRV PARM("DATA") collects a LOT of data – BRMS flight recorders, save/restore flight recorders etc, but NOT tape flight recorders. Submit via WRKPRB. Google for details.
Save/restore problem data (began at V6R1):
- Collected automatically in /TMP/QSR (or QSR/QSR if PTF SI37104 is applied); a much smaller volume of data.
- To submit, use Operations Navigator to move the files from the IBM i IFS to your desktop, then email them to support.

70 SAN Design

71 SAN Design for IBM i
For details, see "SAN Design for IBM i Tape and ProtecTIER" on Techdocs.
Firm rules:
- Multipath is not supported for tape.
- Maximum addresses per tape fibre adapter: fc 2765, 5704, 5761 – 16 devices; fc 5749, 5774, 5735 etc – 64 devices × 2 ports.
- Max drives in a library: 32 drives per TAPMLBxx attached; 92 drives per TAPMLBxx total (IOP'd); 256 drives per TAPMLBxx total (IOPless).
- Prior to 7.1 and the Gen 2 IOPless driver, disparate drives must be separated, via separate tape adapter cards *or* separate tape library partitions.
- Can't pool drives across IOP'd / IOPless cards.
Best practices (design for performance and resiliency):
- Put tape adapters alone on an IOP or virtual IOP so they can be reset.
- Don't mix disk and tape on a fibre adapter (eg on IOPless cards).
- Plan ahead for alt-install if using fibre cards with an IOP (non-boot).
A quick adapter-count check follows below.
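A small sketch derived from the address limits above: how many IOPless fibre adapters are needed to attach N virtual drives, at 64 device addresses per port and 2 ports per card. It assumes each drive consumes one device address and ignores addresses taken by control paths or the library robot, so treat it as a lower bound:

```python
# Lower-bound adapter count for an IOPless fibre attachment.
PORTS_PER_CARD = 2
DEVICES_PER_PORT = 64
MAX_IOPLESS_PER_TAPMLB = 256    # firm rule from the list above

def cards_needed(num_drives: int) -> int:
    if num_drives > MAX_IOPLESS_PER_TAPMLB:
        raise ValueError("exceeds the 256-drive IOPless limit per TAPMLBxx")
    per_card = PORTS_PER_CARD * DEVICES_PER_PORT
    return -(-num_drives // per_card)   # ceiling division

for n in (64, 128, 256):
    print(f"{n} virtual drives -> at least {cards_needed(n)} fibre card(s)")
```

A 256-drive virtual library therefore needs at least two 2-port IOPless fibre cards before performance, resiliency or drive-pooling considerations are factored in.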

72 3995 Optical Migration for POWER7

73 3995 Optical Migration for POWER7 – Details
The devices:
- 3995 – IBM-logo'd optical (HP and Plasmon). Not supported on POWER7.
- 399F – Plasmon G Series (SW support by IBM). OK on POWER7 on IOPless LVD SCSI.
- 3996 – IBM-logo'd Plasmon G Series. OK on POWER7 on IOPless LVD SCSI.
The problem / challenge: keep access to optical data on POWER7, ideally with no changes to optical application interfaces or index.
The alternatives:
- Migrate the data to the IBM i 7.1 Image Catalog Virtual Optical Media Library (7.1 + PTFs + Lab Services license; will run in limited mode on 6.1).
- Migrate the data to another solution such as newer optical (399F / 3996) or network-attached options (eg DR550, InfoArchive, non-IBM appliances, etc).
The solution includes Fast Data Migration using the image catalog (V5R4M5 onwards), which migrates data to new media via the IBM i image catalog at 5-10 times the speed of platter copies.
Contact Mark Even in Rochester Lab Services for details. IBM i website re 3995 / 3996: www-03.ibm.com/systems/i/hardware/storage/optical/ – the first section has excellent materials regarding migration options.

74 Notes / 3996 Model Details
- 3995-x4x – attached via HVD SCSI (really old – the white optical boxes). Supported on any system/release with a supported HVD card. Made by HP.
- 3995-C2x – attached via Ethernet LAN (really old – the first black optical boxes). Supported on any system/release with a supported LAN card.
- 3995-C4x – attached via HVD SCSI. Supported on any system/release with a supported HVD card.
- 399F – Plasmon G-Series, which customers bought directly from Plasmon; Plasmon provided HW support, IBM Rochester provided SW support. 6 enterprise models, 2 midrange models. Normally attached via LVD SCSI, with or without an IOP; option to attach via HVD SCSI, with a simple "swap 1 card" upgrade from HVD to LVD, if still available.
- 3996 – attached via LVD SCSI. IBM re-logo'd the 2 midrange models of the 399F (the most popular ones). Supported on any system/release with a supported LVD SCSI card, with or without an IOP. Made by Plasmon.
- HVD SCSI cards are fc 6501, 6534, 2729 and 2749; all require an IOP. Only fc 2749 is supported on POWER6, and only for optical & 3590.
- All the optical boxes above are supported on POWER6, so long as you put them on a supported IOA. The 3995 is not supported on POWER7; on POWER7, only the IOPless LVD SCSI card is supported, and the 399F/3996 are supported with it.

75 Recap
- Tape Product Line
- Virtual Tape
- New BRMS Report for Tape Planning
- Gen 2 IOPless Tape Driver
- Tape Drive Sharing
- Tape Adapters
- Tape Info APARs
- Optimizing your Tape Performance
- BRMS Parallel Save PTF
- Tape Encryption
- Troubleshooting
- SAN Design Highlights
- Optical Replacement Options

76 Session Evaluations ibmtechu.com/vp Prizes will be drawn from Evals

78 Questions ?

79 Appendix: Other IBM i specific Information for ProtecTIER

80 IBM i IOPless Support for ProtecTIER - Restrictions
These restrictions (Fall 2012) only apply to IOPless fibre cards, not the older IOP'd cards.
Restriction #1: IBM i alt-IPL (reload). To D-IPL your IBM i from a TS7650 virtual drive, use ProtecTIER LUN masking so the adapter card can only see a single virtual drive – the one with the SAVSYS tape in it.
Restriction #2: TS7650 IPL with VIOS. If the ProtecTIER is attached to VIOS, remove the ProtecTIER port(s) from the VIOS SAN zone before IPLing the ProtecTIER; otherwise it may disrupt other tape devices in the VIOS zone. Note: IBM i timers have been adjusted to reduce the chance of disruption, and in certain circumstances customers may opt to IPL the ProtecTIER without removing it from the zone.

81 BRMS DUPMEDBRM Compaction PTF
Exposes the COMPACT parameter so you can compact the physical volumes when you dup from ProtecTIER. Saves to a TS7650 are not compacted, so they take 3× as much virtual media (gained back with dedup). Before the PTF, dups used the same compaction parameter as the source volume, so more physical media was needed; with the PTF, DUPMEDBRM can request compaction, so the dup (eg to a TS3500) uses less media.
Part of the June 2010 BRMS PTF: V5R4 SI38733, IBM i 6.1 SI38739, IBM i 7.1 SI38740. This PTF has been around since June 2010 – most shops likely have it already, but may need to turn it on.
Behavior:
- V5R4: control via data area Q1ADUPCOMP in QTEMP, which can be set to *FROMFILE, *YES or *NO.
- IBM i 6.1 / 7.1: COMPACT(*YES) is available (help text via the web). For the new IBM i 6.1 auto-dup feature, change the command default on DUPMEDBRM to *DEV.
- Future releases: COMPACT(*YES) will be available with regular help text.

82 BRMS DUPMEDBRM Compaction PTF - Ctd
When you run a save to ProtecTIER, if you leave the compaction parameter at *DEV, then IBM i knows you're sending the save to a virtual library and knows NOT to do compaction, since we want ProtecTIER to find the duplicates first and THEN run its LZ1 algorithm to do the compaction. When you do DUPMEDBRM, you DO want compaction on the physical tape. However, prior to the PTF, the DUPMEDBRM command did not "expose" the compaction parameter: it just assumed that you wanted the same compaction setting as you'd used for the original save. So in our case on IBM i, the physical tape dup took 3 times as long and 3 times as much media, since it didn't get the 3:1 compaction that is typical on IBM i. This PTF fixes the problem by letting you set the compaction parameter on the dup; you want to set it to *YES or *DEV. From V6R1 onwards, the PTF lets you actually see the compaction parameter so you can set it; at V5R4 you have to control it via a data area. The details of the PTF are shown on the next page.

83 BRMS DUPMEDBRM Compaction PTF – Ctd
Details are on the BRMS Wiki: in the left sidebar, choose "Devices", "Virtual Tape Libraries", "ProtecTIER", then choose this item from the list in the main panel.
In V5R4, the COMPACT parameter of the DUPTAP command is externalized via a BRMS data area, QTEMP/Q1ADUPCOMP, of length 9.
For the current behavior (the same as having no data area):
CRTDTAARA DTAARA(QTEMP/Q1ADUPCOMP) TYPE(*CHAR) LEN(9) VALUE('*FROMFILE')
For *YES behavior (which is wanted when duping from the TS7650 to physical 3584 tape):
CRTDTAARA DTAARA(QTEMP/Q1ADUPCOMP) TYPE(*CHAR) LEN(9) VALUE('*YES')
For *NO behavior, if needed:
CRTDTAARA DTAARA(QTEMP/Q1ADUPCOMP) TYPE(*CHAR) LEN(9) VALUE('*NO')
This only applies to the job the DUPMEDBRM(s) is run in; if the DUPMEDBRM is done in batch, the data area must be created in the batch job as well.
In V6R1 and above, the COMPACT parameter has been added to the DUPMEDBRM command.
Notes:
1. The auto-duplication feature available from V6R1 onwards will not directly support the new parameter on the media policy, as not all the parameters are being put on this feature. However, by changing the DUPMEDBRM command default to *DEV for the COMPACT parameter, the behavior can be acquired.
2. PTFs SI38733 (V5R4M0), SI38739 (V6R1M0) or SI38740 (V7R1M0), or their superseding PTFs, are required.

84 BRMS Support for Remote Dups to Physical
When BRMS writes a save, you can mark the tape for later duplication. Normally, BRMS does not allow you to move the tape offsite until it has been duplicated, since moving first doesn't make sense in a physical tape world. In the ProtecTIER world, if you are making physical tapes at your remote site, you want to be able to "move" the tapes (eg do a ProtecTIER "visibility switch") before the duplication has happened. This PTF allows that function.
The following PTFs, or their superseding PTFs, are required: V5R4M0 SI42923, IBM i 6.1 SI42924, IBM i 7.1 SI42925.
For details, see the BRMS Wiki: in the left sidebar, choose "Devices", "Virtual Tape Libraries", "ProtecTIER", then choose this item from the list in the main panel.

85 BRMS Support for Remote Dups to Physical - ctd
This function is turned on via a data area in current releases (up to and including IBM i 7.1), and will be added as an official BRMS command in future releases.
To override a move policy to allow movement when a volume is marked for duplication:
CALL QBRM/Q1AOLD PARM('MOVMRKDUP ' '*SET ' 'move policy' 'Y')
To remove the override for a move policy:
CALL QBRM/Q1AOLD PARM('MOVMRKDUP ' '*SET ' 'move policy' 'N')
To display all overrides for move policies:
CALL QBRM/Q1AOLD PARM('MOVMRKDUP ' '*DISPLAY ')
To remove all overrides for move policies:
CALL QBRM/Q1AOLD PARM('MOVMRKDUP ' '*CLEAR ')
Note: in releases IBM i 7.1 and earlier, there is no synchronization of this behavior to other systems in the BRMS network; each system wishing to use this new function needs to run the commands above. In releases following IBM i 7.1, this restriction will be removed.

86 BRMS Parallel Save OPTBLK PTF
Many ProtecTIER customers use BRMS parallel saves to increase the throughput of their backups. BRMS parallel saves (both parallel-parallel and parallel-serial) had been using small blocks (32K) since the function was introduced in V4R4. The June 2011 BRMS PTF switches them to use large blocks (approx 256K): SI42923 (V5R4), SI42924 (V6R1), SI42925 (7.1). This can improve parallel save performance dramatically, since it requires much less CPU to run the backup – a possible BIG performance improvement. Customers who have tried parallel saves in the past and found them not helpful should go back and retry once they apply the PTF.

87 BRMS Initialize on Expiry function
By default, ProtecTIER virtual tapes are not cleaned up when they expire – the data is held on them until the virtual volume is re-written. This means that:
- A lot of ProtecTIER space is tied up with data that is really expired; if there is a large scratch pool, this amount can be excessive.
- ProtecTIER de-dup ratios look really good, since ProtecTIER doesn't know these copies are expired.
- ProtecTIER cleanup will happen during the backup window when the expired tape is overwritten, which may impact backup performance.
From V5R4 onwards, BRMS can ask ProtecTIER to scratch the virtual media when it expires – specifically, when the STREXPBRM command runs to expire the tape in BRMS, typically during daily BRMS maintenance. Note: the expiry process requires the virtual media to be mounted in a virtual drive. This allows the user to control when the ProtecTIER cleanup is done.
The following PTFs, or their superseding PTFs, are required: V5R4M0 SI45327, IBM i 6.1 SI45326, IBM i 7.1 SI45325.

88 BRMS Initialize on Expiry function - ctd
This function is turned on via a data area in current releases (up to and including IBM i 7.1), and will be added as an official BRMS command in future releases.
To turn on initialize-on-expiration during STRMNTBRM:
CALL QBRM/Q1AOLD PARM('INZONEXP ' '*SET ' 'media class' 'Y')
To turn it off:
CALL QBRM/Q1AOLD PARM('INZONEXP ' '*SET ' 'media class' 'N')
To display all media classes that have this option turned on:
CALL QBRM/Q1AOLD PARM('INZONEXP ' '*DISPLAY ')
To remove all media classes that have this option turned on:
CALL QBRM/Q1AOLD PARM('INZONEXP ' '*CLEAR ')
Note: in releases IBM i 7.1 and earlier, there is no synchronization of this behavior to other systems in the BRMS network; each system wishing to use this new function needs to run the commands above. In releases following IBM i 7.1, this restriction will be removed.

89 BRMS PRTRPTBRM *CTLGRPSTAT
This report is very helpful for analyzing your backup environment and sizing tape or ProtecTIER; see the details earlier in this presentation. June 2012 BRMS PTF: V5R4 SI46335 (partial support – see note), V6R1 SI46339, IBM i 7.1 SI46340. Note: reports for a V5R4 system must be created on a V6R1 or IBM i 7.1 system that is in the same BRMS network; use the "From System" parameter.

90 IBM i 15,000 Slot Library Enhancement
Tapes can be stored in many places in a tape library, collectively called "storage elements": slots, the convenience I/O station, the tape drives, and the library's robotic grippers. Historically, IBM i allowed up to 5,000 storage elements in a library or library partition. With physical tape this was plenty for most customers, since most cartridges were stored offsite. With virtual tape, every virtual cartridge needs a slot, whether the main copy is at the home site or the replicated site, so 5,000 storage elements was restrictive: once the max was reached, a new tape library, tape library partition or VTL was needed.
This restriction is lifted by the following July 2012 IBM i PTFs, which increase the max library size to 15,000 storage elements:
- IBM i 6.1.1: MF50093, MF55406
- IBM i 7.1: MF55409

91 256 drives in a Virtual Tape Library (IOPless)
For certain operations, IBM i asks the tape library to send a description of the tape library metrics – eg # slots, # drives, # grippers, etc. This data is received in a buffer on the IBM i tape adapter card, and includes information about ALL drives in the library, not just the ones attached to the IBM i system that is issuing the request.
- On IOP'd fibre cards, the buffer can hold information about up to 92 tape drives.
- On IOPless fibre cards, the buffer can hold information about up to 250 tape drives without any PTFs, or up to 256 tape drives with the PTFs below.
Hence, the maximum number of drives in a virtual library attached to IBM i is 92 (IOP'd) or 250 / 256 (IOPless) accordingly. If you have more drives than this in your virtual library, odd things will happen. If you need a library with more drives, please contact support to see if a larger library can be tested.
PTFs to allow 256 (vs 250) tape drives in a tape library or virtual tape library attached to IBM i (IOPless adapters):
- V6R1M1: MF56115
- IBM i 7.1: MF56114

92 Special notices This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area. Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquires, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY USA. All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied. All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions. IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice. IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies. All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for their specific environment. Revised September 26, 2006

93 Special notices (cont.)
IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 5L, AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, i5/OS, i5/OS (logo), IBM Business Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, Active Memory, Balanced Warehouse, CacheFlow, Cool Blue, IBM Systems Director VMControl, pureScale, TurboCore, Chiphopper, Cloudscape, DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Parallel File System, , GPFS, HACMP, HACMP/6000, HASM, IBM Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2, POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, POWER6+, POWER7, System i, System p, System p5, System Storage, System z, TME 10, Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A full list of U.S. trademarks owned by IBM may be found at: Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. AltiVec is a trademark of Freescale Semiconductor, Inc. AMD Opteron is a trademark of Advanced Micro Devices, Inc. InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency which is now part of the Office of Government Commerce. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Linear Tape-Open, LTO, the LTO Logo, Ultrium, and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries or both. Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both. NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both. SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are trademarks of the Standard Performance Evaluation Corp (SPEC). 
The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org. TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC). UNIX is a registered trademark of The Open Group in the United States, other countries or both. Other company, product and service names may be trademarks or service marks of others. Revised December 2, 2010

94 Notes on benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark consortium or benchmark vendor. IBM benchmark results can be found in the IBM Power Systems Performance Report at . All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, the latest versions of AIX were used. All other systems used previous versions of AIX. The SPEC CPU2006, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C for AIX v11.1, XL C/C++ for AIX v11.1, XL FORTRAN for AIX v13.1, XL C/C++ for Linux v11.1, and XL FORTRAN for Linux v13.1. For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor. TPC SPEC LINPACK Pro/E GPC VolanoMark STREAM SAP Oracle, Siebel, PeopleSoft Baan Fluent TOP500 Supercomputers Ideas International Storage Performance Council Revised December 2, 2010

95 Notes on HPC benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark consortium or benchmark vendor. IBM benchmark results can be found in the IBM Power Systems Performance Report at . All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, the latest versions of AIX were used. All other systems used previous versions of AIX. The SPEC CPU2006, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C for AIX v11.1, XL C/C++ for AIX v11.1, XL FORTRAN for AIX v13.1, XL C/C++ for Linux v11.1, and XL FORTRAN for Linux v13.1. Linpack HPC (Highly Parallel Computing) used the current versions of the IBM Engineering and Scientific Subroutine Library (ESSL). For Power7 systems, IBM Engineering and Scientific Subroutine Library (ESSL) for AIX Version 5.1 and IBM Engineering and Scientific Subroutine Library (ESSL) for Linux Version 5.1 were used. For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor. SPEC LINPACK Pro/E GPC STREAM Fluent TOP500 Supercomputers AMBER FLUENT GAMESS GAUSSIAN ANSYS Click on the "Benchmarks" icon on the left hand side frame to expand. Click on "Benchmark Results in a Table" icon for benchmark results. ABAQUS ECLIPSE MM5 MSC.NASTRAN STAR-CD NAMD HMMER Revised December 2, 2010

96 Notes on performance estimates
rPerf for AIX rPerf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC and SPEC benchmarks. The rPerf model is not intended to represent any specific public benchmark results and should not be reasonably used in that way. The model simulates some of the system operations such as CPU, cache and memory. However, the model does not simulate disk or network I/O operations. rPerf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the time of system announcement. Actual performance will vary based on application and configuration specifics. The IBM eServer pSeries 640 is the baseline reference system and has a value of 1.0. Although rPerf may be used to approximate relative IBM UNIX commercial processing performance, actual system performance may vary and is dependent upon many factors including system hardware configuration and software design and configuration. Note that the rPerf methodology used for the POWER6 systems is identical to that used for the POWER5 systems. Variations in incremental system performance may be observed in commercial workloads due to changes in the underlying system architecture. All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM. Buyers should consult other sources of information, including system benchmarks, and application sizing guides to evaluate the performance of a system they are considering buying. For additional information about rPerf, contact your local IBM office or IBM authorized reseller. ======================================================================== CPW for IBM i Commercial Processing Workload (CPW) is a relative measure of performance of processors running the IBM i operating system. Performance in customer environments may vary. The value is based on maximum configurations. More performance information is available in the Performance Capabilities Reference at: Revised April 2, 2007

