1
Advanced Technical Skills Education Jim Fisher and Carl Bauske
IBM Magnetic Tape Advanced Technical Skills Education, Gaithersburg, MD, presents: TS7700 Introduction. What Is It? Jim Fisher and Carl Bauske. Updated for Release 2.0, September 7, 2011.
2
Topics: History and Architecture; Hardware Components; Basic Concepts; Multi-Cluster Grids
3
Over 56 Years of Tape Innovation
Starting in 1952 with the IBM 726 Tape Unit (7,500 characters per second, 100 bits per inch) and continuing through 2011 with the IBM TS1140 Tape Drive (up to 250 MB/s1, up to 4 TB1) and the IBM LTO5 Tape Drive (up to 120 MB/s1, up to 1.5 TB1).
Milestones: 1848 George Boole invents binary algebra; 1928 IBM invents the 80-column punch card; 1952 IBM 726, first magnetic tape drive; 1959 IBM 729, first read/write drive; 1964 IBM 2401, first read/back drive; 1984 IBM 3480, first cartridge drive.
Enterprise line: 1995 IBM 3590; 1999 IBM 3590E; 2003 3592 Gen1; 2005 3592 Gen2; 2008 3592 Gen3; 2011 3592 Gen4.
Open (LTO) line: 2000 LTO Gen1; 2002 LTO Gen2; 2004 LTO Gen3; 2007 LTO Gen4; 2010 LTO Gen5.
1 Represents maximum native performance or cartridge capacity.
4
IBM Tape Technology History
Year | Product      | Capacity (MB) | Transfer Rate (KB/s)
1952 | IBM 726      | 1.4           | 7.5
1953 | IBM 727      | 5.8           | 15
1957 | IBM 729      | 23            | 90
1965 | IBM 2401     | 46            | 180
1968 | IBM 2420     |               | 320
1973 | IBM 3420     |               | 1250
1984 | IBM 3480     | 200           | 3000
1989 | IBM 3490     |               | 4500
1991 | IBM 3490E    | 400           | 9000
1992 | IBM 3490E    | 800           |
1995 | IBM 3590-B1A | 10,000/20,000 |
1999 | IBM 3590-E1A | 20,000/40,000 | 14,000
2002 | IBM 3590-H1A | 30,000/60,000 |
2003 | IBM 3592-J1A | 300,000       | 40,000
2005 | IBM 3592-E05 | 700,000       | 100,000
2008 | IBM 3592-E06 | 1,000,000     | 160,000
2011 | IBM 3592-E07 | 4,000,000     | 250,000
5
Virtual Tape Concepts
Virtual tape drives: appear as multiple 3490E tape drives; shared and partitioned like real tape drives; designed to provide enhanced job parallelism; require fewer real tape drives; the TS7700 offers 256 virtual drives per cluster.
Tape volume caching: all data access is to disk cache, which removes common physical tape delays (fast mount, positioning, load, and demount); up to 28 TB / 440 TB of cache (uncompressed).
Volume stacking (TS7740): designed to fully utilize cartridge capacity (3592 cartridges of up to 1000 GB native capacity); helps reduce cartridge and footprint requirements.
(Diagram: virtual drives at device addresses 100-1FF write virtual volumes into the tape volume cache; logical volumes are then stacked onto 3592 cartridges.)
6
Logical Volumes and Stacked Physical Volumes
7
VTS/TS7700 Evolution

First Introduced | Device          | Backend Drive    | Host Interface | Host I/O Rate | Virtual Drives | Logical Volumes | Logical Volume Size | Cache
1996             | 3495-B16        | 3590             | ESCON          | 9 MB/sec      | 32             | 50,000          | 400, 800 MB         | 144 GB
1997             | 3494-B16        |                  |                |               |                |                 |                     |
1998             | 3494-B18        |                  |                | 50 MB/sec     | 128            | 250,000         |                     | 1.7 TB
2001             | 3494-B10        | 3590 or 3592     | ESCON, FICON   |               | 64             |                 | 400 to 2000 MB      | 432 GB
2001             | 3494-B20        | 3590 and/or 3592 | ESCON, FICON   | 150 MB/sec    | 256            | 500,000         | 400 to 4000 MB      |
2006             | TS7740          | 3592             | FICON          | 600+ MB/sec   |                | 1,000,000       |                     | 6 TB
2008             | TS7740          |                  |                |               |                |                 |                     | 13.7 TB
2008             | TS7720          | ----             |                |               |                |                 |                     | 70 TB
2011             | TS7740 / TS7720 |                  |                | 1000+ MB/sec  |                | 2,000,000       | 400 to 6000 MB      | 28.2 TB / 440 TB
8
Benefits of TS7700 Virtualization Engine
Can fulfill all tape requirements: DASD dumps as well as very small volumes.
Fills large-capacity cartridges: by putting multiple virtual volumes onto a stacked volume, the TS7740 uses all of the available space on the cartridge.
DFSMShsm, TMM, or a Tape Management System is not required: the TS7700 provides a hardware stacking function, with no software prerequisites.
More tape drives are available: data stacking is not the only benefit; the TS7700 emulates 256 virtual drives per cluster, allowing more tape volumes to be used concurrently.
Little management overhead: the stacking and copying of data is performed by the TS7700 management software, without host cycles.
Provides off-site storage of physical tapes through Copy Export.
9
Installation of TS7700 Virtualization Engine
JCL modifications are not required: the JCL references a virtual volume and does not need modification (a hedged example follows).
Controlled by DFSMS: the whole grid presents the image of a single DFSMS library, containing virtual tapes and the virtual drives associated with them.
No new software support is required: the TS7700 works with a variety of mainframe operating systems, including z/OS with SMS; no additional software product is required to implement it in your current environment.
Simple synergy with Tape Management Systems.
The host can influence TS7700 behavior if desired: ACS routines can be used to assign storage constructs.
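As a minimal sketch of the point that existing JCL keeps working, the job below writes a tape data set with IEBGENER exactly as it would to real 3490E drives. The job name, data set names, unit name, and DCB values are hypothetical; in an SMS tape environment the ACS routines, not the JCL, route the allocation into the TS7700 and assign its storage constructs.

//TAPEWR   JOB (ACCT),'WRITE TO TS7700',CLASS=A,MSGCLASS=X
//* Nothing below is TS7700-specific: the data set is written to a
//* virtual 3490E volume the same way it would be written to real tape.
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=MY.TAPE.BACKUP,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=TAPE,LABEL=(1,SL),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)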
10
TS7700 Modular/Scalable Architecture
(Diagram: a host running DFSMS, a Tape Management System, and the 3494E driver attaches over FICON to a VNode; the VNode and HNode share the disk cache, and the HNode manages metadata, connections to peers, and the tape cartridges.)
The architecture separates the functionality of a virtualization solution into two pieces:
Virtualization Node (VNode): handles all virtual tape device and media handling functions and encapsulates host data in an object stored on the disk cache arrays.
Hierarchical Storage Management Node (HNode): manages the objects stored on disk cache and physical tape media, and manages the data movement between disk cache, physical tape media, and peers.
A VNode and an HNode together make up the function of a TS7700 Virtualization Engine (GNode).
Uses standards-based interfaces for control and data movement between nodes: Ethernet and WebSphere for control, FCP for site-local data.
Gridding is an integral part of the HNode architecture: it uses standard TCP/IP for interconnection and is architected for multiple peers.
The solution with physical back-end tape is referred to as the TS7740; the disk-only version is the TS7720.

The TS7700 employs a new architectural base that separates the functions associated with how the attached hosts view a storage device from the management of the resultant data through a disk and tape hierarchy. All of the storage device characteristics the attached hosts see are encapsulated in a Virtualization Node, or VNode. It is the job of a VNode to interact with its attached hosts and to read and write data to a logical tape volume stored on a Fibre Channel attached RAID disk storage subsystem. All of the storage management tasks for the logical tape volumes on the disk storage subsystem and on physical tape volumes are encapsulated in the Hierarchical Storage Management Node, or HNode. A VNode does not need to know how the HNode is to manage the data, and the HNode does not need to know how the VNode interacts with its attached hosts. VNodes and HNodes cooperate and use WebSphere message queues to communicate with each other. Data is moved between the V and H nodes and the cache using FCP. Another key element of the new architectural base is that the ability to communicate with another TS7700, to act in a disaster recovery/business continuity solution, is built in. This is unlike the current PTP VTS, which uses an additional set of pSeries boxes to link two VTS subsystems together.
11
TS7700 - Capabilities
Mainframe attachment
Tape Volume Cache: TS7740 RAID 5, 1 to 28 TB of cache (approximately 3 to 84.5 TB compressed); TS7720 RAID 6, 20 TB to 440 TB of cache (approximately 60 to 1320 TB compressed)
256 virtual tape devices
2 million logical volumes
Advanced Policy Management
4 to 16 physical drives (TS7740)
3584 physical library support (TS7740)
High-level overview of the capabilities of the next-generation VTS.
12
The Engines of the TS7740 and TS7720
POWER7 Atlas HV32 server: 4 to 32 processors, up to 256 GB DDR3 memory.
Performance scales significantly through pluggable 4-way and 8-way processor cards, providing scalable processing power for future enhancements.
As shipping today: one 3 GHz 8-core processor card and 16 GB of memory at GA.
Runs IBM AIX as the host for the firmware.
13
TS7700 Virtualization Engine Solutions
Supports RTO/RPO measured in seconds.
TS7700 Grid configuration: couples two or more TS7700 clusters together to form a Grid configuration; hosts can attach directly to all TS7700s; all volumes are accessible through any TS7700 cluster in the Grid configuration, independent of data consistency; each TS7700 presents 256 virtual 3490E drives.
IP-based replication: standard TCP/IP over Ethernet; two or four 1 Gbps Ethernet links, RJ45 copper (Cat 6) or SW fibre optic, or two 10 Gbps LW fibre optic links.
Policy-based selective replication management at the volume level.
Can be configured for disaster recovery or higher-availability environments.
(Diagram: hosts attached to two TS7740s, each with optional drives/library, interconnected by IP links; connections to the second cluster are optional.)

As a business continuation solution, two TS7700 clusters can be interconnected using standard 1 Gb Ethernet connections. Local as well as geographically separated connections are supported, providing a great amount of flexibility to address customer needs. The virtual tape controllers and remote channel extension hardware of the prior generation's PTP VTS have been eliminated, providing the potential for significant simplification in the infrastructure needed for a business continuation solution as well as simplified management. Instead, the interconnect between the TS7700 systems uses standard TCP/IP connections and protocols. The TS7740 Virtualization Engine provides two independent 1 Gbps Ethernet ports. The Ethernet ports use copper, and it is highly recommended that Cat 6 cabling be used to utilize the bandwidth capability of the 1 Gbps interface. For customers that want to use an optical fiber connection instead of copper, RPQ Q8B3409 can be ordered. With that RPQ, the factory will install shortwave optical adapters in place of the copper Gb Ethernet adapters. For the field, the RPQ provides the adapters and instructions for IBM service to replace the copper adapters with the shortwave optical fiber adapters. The field replacement of the adapters must be done with the TS7740 offline. The shortwave adapters support a link distance of up to 260 meters using 62.5 micron multi-mode fiber optic cable and 550 meters using 50.0 micron multi-mode fiber optic cable. Like the prior generation's PTP VTS, with the new TS7700 Grid configuration data may be replicated between the clusters based on customer-established policies. Any data can be accessed through either of the TS7700 clusters regardless of which system the data resides on. If desired for higher availability, the main processing host can be connected to the FICON channels of the TS7700 cluster at the remote site. If the local TS7700 cluster fails or service needs to be performed on it, customer operations can continue by accessing data through the FICON channels extended to the remote TS7700 cluster. If both clusters are attached to the same host, up to 512 virtual device addresses are available; APAR OA19061 is required to enable a host to access more than 256 drives. Through the use of standards-based interfaces, a planned extension to a Grid configuration would add a third TS7700 cluster, providing for both local and long-distance copying of host-created data.

IP replication may greatly simplify the infrastructure and management needed for a disaster recovery solution as compared to FICON extender based solutions.
14
TS7700 Two Cluster GRID Configuration for High Availability
The two TS7700s are located at one site, interconnected through a Local Area Network.
Hosts are attached to both TS7700s; data is available through either TS7700.
24x7 data availability for the local host(s).
(Diagram: hosts attached to both TS7700 clusters, each with its 3592/TS3500 back end; the clusters are interconnected by IP links over the LAN.)

The TS7700 Grid configuration may be configured for a high availability environment, a disaster recovery environment, or both. This slide shows a typical configuration for a high availability environment. Both of the TS7700s in the Grid configuration are located at the same site and are connected through a Local Area Network. Production workloads are written using the virtual tape device addresses in both of the TS7700s. Data written to one TS7700 is replicated to the other. In the event of a failure or the need to service one of the TS7700s, customer applications may continue to access all logical volumes through the remaining operational TS7700, assuming it has a valid copy. In order to access the logical volumes owned by the unavailable TS7700, the operational TS7700 is enabled for ownership takeover. This allows it to access and modify any volumes that were owned by the unavailable TS7700. Ownership takeover mode is very similar to the PTP VTS's Read/Write Disconnected mode, which allowed a PTP to provide access to data in the event the master VTS became unavailable.
15
TS7700 Two Cluster GRID Configuration for Disaster Recovery
The two TS7700s are located at two geographically separated sites, interconnected through a Wide Area Network.
The disaster recovery host is connected to the remote TS7700.
If the local TS7700 is unavailable, data is available at the remote TS7700.
The Ownership Takeover Manager is enabled automatically when one of the TS7700s is not available.
(Diagram: production site hosts attached to the local TS7700 and its 3592/TS3500 back end; IP links over the WAN connect to the disaster recovery site, where the disaster recovery hosts attach to the remote TS7700 and its 3592/TS3500 back end.)

This slide shows a typical configuration for a disaster recovery environment. One of the TS7700s in the Grid configuration is located at the production site and one is located at the disaster recovery site. The TS7700s are connected through a Wide Area Network. Production workloads are written using the virtual tape device addresses in the TS7700 located at the production site and replicated to the TS7700 at the disaster recovery site. In the event of a disaster that leaves the production site unusable, customer operations may resume at the disaster recovery site by running production applications on the disaster recovery host. Prior to starting the production runs on the disaster recovery host, the TS7700 at the disaster recovery site is enabled for ownership takeover. This allows it to access and modify any volumes that were owned by the production site when it became unusable. Note that there are no connections between the production host and the TS7700 at the disaster recovery site. If it is necessary to service the TS7700 at the production site, or if it has failed, there will be no access to the data from the production site during that time.
16
TS7700 Three Site Grid – High Availability and Disaster Recovery
(Diagram: at the local production site/campus/region, System z hosts attach over FICON, optionally through DWDM or channel extension, to TS7700 Cluster-0 and TS7700 Cluster-1; TS7700 Cluster-2 sits at the remote disaster recovery site, connected over the WAN.)
Copies to the remote site are usually deferred; copies between the local clusters are usually immediate.
17
Topics: History and Architecture; Hardware Components; Basic Concepts; Multi-Cluster Grids
18
TS7740 Virtualization Engine Components
TS7740 Virtualization Engine (3957 Model V07): POWER7 Atlas HV32 server with one 3 GHz 8-core processor card and 16 GB of memory; runs the V and H nodes (GNode); integrated Enterprise Library Controller.
TS7740 Cache Drawer (3956 Model CX7): 16 x 600 GB 15K RPM FC HDDs; 7.04 TB usable capacity (after RAID 5 and spares); three drawers maximum, adding approximately 21 TB.
TS7740 Cache Controller (3956 Model CC8): disk RAID array controller with 16 x 600 GB 15K RPM FC HDDs; adding up to three CX7 drawers provides up to approximately 28.2 TB total capacity.
3952 Model F05 Frame: houses the major components and support components; dual power.

There are four major components that make up a TS7700 cluster. The TS7740 refers to a specific vintage of component, in this case the fourth generation of IBM enterprise virtualization products. The TS7740 Virtualization Engine model V06 is a System p server platform. Its dual-core, two-way POWER5 processor and high speed I/O buses host the V and H node functions. The TS7740 Cache Controller model CC6 is an integrated RAID disk controller with storage. It comes configured with 16 15K 146 GB hard disk drives. After RAID and hot sparing considerations, it provides a usable capacity of 1.5 TB. The TS7740 Cache Drawer model CX6 is a disk expansion drawer that ties to the RAID controller. It also comes configured with 16 15K 146 GB hard disk drives and, like the controller, provides a usable capacity of 1.5 TB after RAID and hot sparing. All of these components, along with redundant Ethernet switches, are housed in a 3952 Model F05 frame. The F05 frame provides a dual power feed to all of the components. The positioning of the components in the frame allocates the space and necessary infrastructure connections for a second TS7740 Virtualization Engine, planned as a future enhancement to the TS7700 series. The combination of the Virtualization Engine components is called a TS7700 cluster.
19
TS1130/TS1140 Support (3592-E06/E07)
All drives in the system must be of the same type: 3592-J1A (or 3592-E05 in J1A emulation mode), 3592-E05 in native mode, 3592-E06/EU6, or 3592-E07.

Media types and capacities by drive:
Media | 3592-J1A (no longer available) | 3592-E05/TS1120 | 3592-E06/TS1130 | 3592-E07/TS1140 (supported 9/9/2011)
JJ    | 60 GB                          | 100 GB          | 128 GB          |
JA    | 300 GB                         | 500 GB          | 640 GB          | read-only support in 2012
JB    |                                | 700 GB          | 1000 GB         | 1600 GB
JC    |                                |                 |                 | 4000 GB
20
(Diagram: a mainframe host attaches to the TS7740 over four FICON paths; TS7700 clusters interconnect over TCP/IP across the LAN/WAN; the TS7740 attaches through two 8 Gb Fibre Channel connections and a fibre switch to the 3592 tape drives in a TS3500 Tape Library.)
21
TS7720 Disk Only VE for Mainframe
Provides the benefits of the TS7740 Virtualization Engine without the need for physical tape.
All logical volumes are kept in the disk cache; the number of logical volumes depends on their size.
Same functionality as the TS7740 system.
Transparent support on z/OS, z/VM, z/VSE, TPF, and z/TPF.
Standalone and business continuation configurations.
Automatic cache management in grid configurations.
22
TS7720 Disk Only VE Components
TS7720 Disk-Only VTE (3957-VEB): POWER7 Atlas HV32 server with one 3 GHz 8-core processor card and 16 GB of memory; runs the V and H nodes (GNode).
TS7720 Cache Drawer (3956-XS7): RAID array expansion (RAID 6); 16 x 2 TB 7.2K RPM SATA HDDs; approximately 24 TB usable (after RAID and spares).
TS7720 Cache Controller (3956-CS8): disk RAID array controller (RAID 6); approximately 20 TB usable (after RAID and spares).
3952 Model F05 Frame: Ethernet routers for service and management interface functions; houses the major components and support components; dual power.
23
Up to 441 TB of cache (uncompressed)
TS7720 Expanded Cache: up to 441 TB of cache (uncompressed); cache can be added in single-drawer increments.
Minimum TS7720 cache configuration: a base frame with one CS8 cache controller (19.84 TB usable, 2 TB drives).
Maximum TS7720 cache configuration: a base frame with one CS8 controller and six XS7 drawers, plus an expansion frame with two CS8 controllers and ten XS7 drawers (CS8: 19.84 TB usable each; XS7: 23.84 TB usable each), for a total of approximately 441 TB.
24
TS7720 Multi-Cluster Grid
(Diagram: a mainframe host attaches over FICON to TS7720 clusters, which are interconnected over TCP/IP across the LAN/WAN to form a multi-cluster grid.)
25
TS7720 - Enhanced Removal Policies
Pinned: volumes which are pinned will never be removed from the TS7720.
Prefer Remove: volumes assigned to this group are removed first, in least recently used order.
Prefer Keep: volumes assigned to this group are removed last, in least recently used order; only when all valid Prefer Remove candidates have been removed will these volumes be removed.
A minimum retention time applies to Prefer Keep and Prefer Remove volumes.
(Diagram: a TS7720 cluster acts as a very large cache; data migrates over the LAN/WAN to a TS7740 cluster, whose intermediate cache and drives/library hold the tape-migrated data for access.)
26
TS7700 – Capacity on Demand
TS7740 and TS7720: FICON host interface performance increments, from 100 MB/sec up to 1000 MB/sec (unbridled), added in 100 MB/sec feature-code increments.
TS7740 and TS7720: cache sizes from one drawer up to the maximum configuration.
TS7740 / TS3500: back-end tape drives, 4 to 16 drives.
TS7740 / TS3500: cartridge and robotic capacity, from a single frame to 18 frames.
27
Topics: History and Architecture; Hardware Components; Basic Concepts; Multi-Cluster Grids
28
Advanced Policy Management (APM)
Enables customer control of data management and placement in a TS7700, similar to existing DASD data management and placement controls.
An extension to DFSMS storage constructs and automatic class selection routines, enabled by the customer through the TS7700's controls.
Establishes the infrastructure for future functional controls; future functions are not dependent on host software changes and only require setup at the library.
The TS7700 will operate with defaults only: it ships with default settings, and the user can take the defaults and customize settings only as required.
29
APM - Actions
Physical Volume Pooling (TS7740 only): associates logical volumes with a set of physical volumes; controlled through the Storage Group construct.
Selective Dual Copy (TS7740 only): two copies of a logical volume on different physical volume pools; controlled through the Management Class construct.
Copy Export: exported copies are still managed by the grid but are stored off site; controlled through the Management Class construct.
Replication Policies (Copy Consistency Points): control of logical volume copies between TS7700s in a grid; controlled through the Management Class construct.
30
APM - Actions
Tape Volume Cache Management: control of which logical volumes have preference to be kept in cache; controlled through the Storage Class construct.
Logical Volume Size: controls the size of logical volumes (insert size, 1000 MB, 2000 MB, 4000 MB, 6000 MB); controlled through the Data Class construct.
Logical WORM: controls whether a logical volume is treated as a WORM volume; controlled through the Data Class construct.
31
Advanced Policy Management – Storage Constructs
(Flow: at allocation time on the host, the Tape Management System pre-ACS exit and the ACS routines assign the Data Class, Storage Class, Management Class, and Storage Group construct names for a new file on volume ABC123. The construct names are passed to the TS7700/library and recorded in the TS7700 inventory database, where the action definitions for each construct, set up through the web interface, determine the actions performed with the volume: physical volume pooling, selective dual copy, grid replication policy, cache management, and larger logical volumes.)
Constructs are used for the same reasons for tape as for disk data. A hedged JCL sketch follows.
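As an illustrative sketch only (the data set and construct names below are hypothetical, not taken from this presentation), a DD statement can code the SMS construct names directly as inputs to the ACS routines; in most installations the ACS routines assign or override them, and the Storage Group can only be assigned by ACS.

//* Hypothetical construct names mapped to the APM actions described
//* above: Data Class (logical volume size), Storage Class (cache
//* management), Management Class (dual copy / grid replication).
//BACKUP   DD DSN=PROD.PAYROLL.BACKUP,DISP=(NEW,CATLG,DELETE),
//            UNIT=TAPE,LABEL=(1,SL),
//            DATACLAS=DC6000M,
//            STORCLAS=SCPREF1,
//            MGMTCLAS=MCPRODRN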
32
Volume Pooling Example
33
Management Class - Replication Policy
Copy Consistency Points define where copies are to reside (by distributed library) and when copies are to be consistent with the host that created the data: at volume close time (Rewind/Unload, RUN), after volume close time (Deferred), or no copy.
Example Management Class PRODRUN: LIB001 RUN, LIB002 RUN, LIB003 Deferred (copies at LIB001 and LIB002 are consistent at RUN).
Example Management Class PROD2COP: LIB001 RUN, LIB002 Deferred, LIB003 None (copies at LIB001 and LIB002 only).

Management Class is the storage construct used to define the copy policy. With the current PTP, the only thing to specify was how the copy is to be made to the other VTS. With the next-generation VTS, this has been expanded to specify the copy consistency point for each cluster of the composite library. In the examples there are three sites, LIB001, LIB002, and LIB003, and a copy consistency point is defined for each. In the first example, there are to be copies at LIB001 and LIB002 consistent with the Rewind/Unload command execution and a copy at LIB003 sometime later. This would be fairly typical of two sites located close to one another with a third remote site for disaster backup. If the data is being created by the host connected to LIB001, its copy is naturally consistent at RUN time, and, as with today's PTP VTS, the copy to LIB002 is made during RUN command processing. The second example only lists LIB001 as having a copy consistent with Rewind/Unload. Assuming that the host connected to LIB001 is creating the data, the copy is naturally consistent at RUN time. The second example actually does more than just specify the consistency point: it also provides information that the composite library (grid) uses to determine where to route data initially created by the host. For example, if a host connected to LIB003 specified the second management class, even though the host is attached to LIB003, the virtual volume is actually mounted on LIB001, because that library is to have a copy consistent with RUN time, so the data is written there to begin with. There is a lot more to the copy policy than is covered in this chart.
34
Cache Management Policy
Controlled with Storage Class
35
Fast Ready Category Attribute
A category can be identified as a scratch category (DEVSUPxx in z/OS). The host performs a scratch mount for a write from beginning-of-tape (BOT); the TS7700 mounts a newly initialized tape volume, so no recall from physical tape is required and the TS7700 presents Ready in a fast manner.
Delete Expired attribute: expire the logical volume x hours after it is returned to the category ("scratched" by the host). The TS7700 then no longer manages the data; space on the physical cartridge is marked as unused (TS7740) and space in cache is marked as unused (TS7720, TS7740).
Expire Hold attribute: the TS7700 will not expire the logical volume for the specified time, providing a guaranteed grace period.
36
What is Reclamation? Recovers unused space on stacked volumes.
The TS7740 stacks logical volumes onto physical volumes and fills each physical volume; initially the physical volume is 100% full.
Over time, logical volumes become invalid on the physical volume: after being returned to scratch AND one of the following occurs: the volume is allocated to satisfy a scratch mount, or the Delete Expired time passes.
The reclaim process transfers the active logical volumes from partially full physical volumes to an empty physical volume: a new full physical volume is created, and the reclaimed volume returns to the scratch pool.
37
Life Cycle of a Logical Volume
A scratch mount occurs and a scratch logical volume is selected; the logical volume is set to private. Time goes by. Return To Scratch processing occurs; the TS7700 still manages the volume. The TS7700 stops managing the volume's data when the logical volume is selected to satisfy a scratch mount, or when the Delete Expired time passes (if set).
38
How is Reclaim Controlled?
Reclaim Threshold: determines at what level of active data a physical volume becomes eligible for reclaim; set on a per-pool basis. Reclaim uses two back-end drives per reclaim task (the TS7740 makes sure at least one drive remains available for recall) and uses CPU resources.
Inhibit Reclaim Schedule: inhibits reclaim operations during the periods specified; overridden for panic reclaim (fewer than 2 scratch physical volumes available). Reclaim should be inhibited during peak production periods.
Host Console Command: can be used to limit the number of reclaim tasks (R1.6): LI REQ, lib_name, SETTING, RECLAIM, RECMMAX, xx
39
Effect of Reclaim Threshold
A higher threshold moves more data at reclaim time, causes reclaims to occur more often, consumes back-end drive resources, and uses more CPU resources.
Average amount of data on a cartridge: with a Reclaim Threshold of 10%, on average a cartridge is 55% full; with 20%, 60% full; with 30%, 65% full.

Active data moved per cartridge at reclaim time (reclaim threshold x cartridge capacity):
Cartridge Capacity | 10%    | 20%    | 30%    | 40%
300 GB             | 30 GB  | 60 GB  | 90 GB  | 120 GB
500 GB             | 50 GB  | 100 GB | 150 GB | 200 GB
640 GB             | 64 GB  | 128 GB | 192 GB | 256 GB
700 GB             | 70 GB  | 140 GB | 210 GB | 280 GB
1000 GB            | 100 GB | 200 GB | 300 GB | 400 GB
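A rough sketch of where those averages come from, assuming the active data on a cartridge drains approximately uniformly from 100% full down to the reclaim threshold T before the cartridge is reclaimed:

\[
\text{average fullness} \approx \frac{100\% + T}{2},
\qquad \text{e.g. } T = 10\% \Rightarrow \frac{100\% + 10\%}{2} = 55\%,\quad
T = 20\% \Rightarrow 60\%,\quad T = 30\% \Rightarrow 65\%.
\]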
40
Reclaim Recommendations
Threshold of 20% - 30% for regular pools.
Threshold of 10% for Copy Export pools.
Logical volumes are recalled into cache if needed; both primary and secondary copies are pre-migrated to tape.
If necessary, inhibit reclaim during heavy production periods.
If necessary, adjust the maximum number of reclaim tasks as needed.
41
Copy Export
(Diagram: at the production site, the source TS7740 keeps a dual copy of the data, with the primary copy in Pool 01 and the second copy in Pool 09; the second copy is exported together with the TS7740 database. At the recovery site, all exported volumes are loaded into the recovery TS7740's library and the database is restored from the latest export volume, after which the recovery host can access the data through the recovery TS7740.)
42
TS7740 Back-End Encryption
(Diagram: hosts running z/OS, AIX, Linux, Windows, or Sun attach to the TS7740 over FICON; TKLM instances with their key stores and crypto services are reachable over the network; the TS7740 connects to the encrypting drives over fibre.)
The proxy in the TS7740 provides the bridge between the drive Fibre Channel interface and the network for TKLM exchanges.

The TS7740 takes the place of the control unit as the proxy between the drive and the EKM. The EKMs can reside on a host connected to the TS7740 or on other machines. With the TS7740, encryption is performed on the back-end TS1120 drives; the virtual drives do not encrypt. Encryption is controlled through the existing 32 storage pools on the TS7740. The host uses the existing storage constructs of Storage Group and Management Class to direct which pool of 3592 cartridges the logical volume data is stored on. If a pool is designated as an encrypting pool, the logical volume data will be encrypted when it is written to the 3592 cartridge. Each pool can define which public/private key pairs to use to encrypt the data key on the tape cartridge. Encryption policy is based on the storage pool, which is controlled through Advanced Policy Management (APM): Storage Group and Management Class.
43
Logical Write Once, Read Many (LWORM) Volumes
Expands TS7700 support into the compliance storage space (all clusters in a grid must be at Release 1.6).
Logical volumes can now support non-rewritable storage solutions, emulating Write Once, Read Many (WORM) physical tape functionality: a logical volume can only be appended to, beginning after the last customer data record. Supported for IBM Standard and ANSI labeled volumes.
Every write from BOT generates a unique ID for tracking and detection of volume replacement.
Supported by both the TS7720 and TS7740; uses the Data Class storage construct to specify that a logical WORM volume is to be allocated.
The retention period is controlled by the application through a Tape Management System. When the data is expired, the volume can become a candidate for re-use as a new instance of a logical WORM volume or as a full read/write volume.

There are a variety of governmental requirements for ensuring that data cannot be modified once it has been written to a storage device, as well as for protecting that data from deletion for a long period of time. Release 1.6 introduces logical WORM support in the TS7700. There are two aspects of logical WORM. The first is preventing previously written data from being modified; this is handled by the TS7700. The second is retention of the WORM volume; this is handled by the tape management system, such as RMM, and is illustrated by the timeline at the bottom of the slide. During the tape management system's retention period the volume remains non-rewritable. It remains non-rewritable when it is returned to scratch, until it is either selected to satisfy a scratch mount or is expired because of Delete Expired processing. To support logical WORM, all clusters in a grid must be at Release 1.6. Logical WORM is supported for both the TS7720 and the TS7740, and logical WORM and read/write volumes are supported at the same time in the TS7700. A logical volume can also go from being a WORM volume to a read/write volume, as long as the volume is written from beginning of tape (BOT). Like existing IBM physical WORM media, a unique ID is created for each volser/logical-volume instance. This unique ID is used to validate that the volume has not been deleted, re-added to the library, and modified. It also prevents a volume from being removed from the TS7700 until the data on the volume has been expired by the tape management system and the customer, through the tape management system, indicates the volume can be released from being a WORM volume. Unlike physical WORM media, once a logical volume has been released from being a WORM volume it can be re-used as a scratch volume again; on its re-use as a WORM volume it is assigned a new unique ID. In a TS7740, the logical WORM volume is migrated from cache to normal physical tape, but retains its logical WORM characteristics every time it is brought back into cache. For extra security, to prevent a physical tape from being removed, having one or more logical volumes on it modified, and then being placed back in the library, the data on the physical volume can be encrypted. The logical WORM characteristics for a volume are also maintained across all of the clusters in a Grid configuration and through Copy Export.

(Timeline: a scratch mount with Data Class = LWORM starts the volume; it is non-rewritable through its retention period and after being returned to scratch, until it is subject to being scratched or deleted by z/OS DFSMS RMM and re-written from BOT.)
44
Grid Link Status - Host Console Request
CBR1280I Library ATL001A request. Keywords: Status,Gridlink
The GRIDLINK STATUS V1 response includes a capture timestamp and the following sections:
LINK VIEW: per-link configured and negotiated state, read, write, and total MB/s, and error counts.
LINK PATH LATENCY VIEW: latency in msec to the other libraries (ATL001B, ATL001C) over links 0-3.
CLUSTER VIEW: data packets sent, data packets retransmitted, and percent retransmitted.
LOCAL LINK IP ADDRESS: the IP address of each of links 0, 1, 2, and 3.
45
z/OS Host Console Request Keywords

Keyword(s)                   | Description
CACHE                        | Requests information about the current state of the cache and the data managed within it, associated with the specified distributed library.
COPYEXP zzzzzz RECLAIM       | Requests that the specified physical volume, previously exported in a copy export operation, be made eligible for priority reclaim.
COPYEXP zzzzzz DELETE        | Requests that the specified physical volume, previously exported in a copy export operation, be deleted from the TS7700 database. The volume must be empty.
GRIDCNTL COPY ENABLE|DISABLE | Requests that copy operations for the specified distributed library be enabled or disabled.
LVOL                         | Requests information about a specific logical volume.
PDRIVE                       | Requests information about the physical drives and their current usage, associated with the specified distributed library.
POOLCNT [0-32]               | Requests information about the media types and counts, associated with a specified distributed library, for volume pools beginning with the value in keyword 2.
PVOL                         | Requests information about a specific physical volume.
RECALLQ [zzzzzz]             | Requests the content of the recall queue, starting with the specified logical volume.
RECALLQ zzzzzz PROMOTE       | Requests that the specified logical volume be promoted to the front of the recall queue, then returns the content of the recall queue.
SETTING                      | Several keywords; view and set current alert, cache, and throttling values.
STATUS GRID                  | Requests information about the copy, reconcile, and ownership takeover status of the libraries in a Grid configuration.
STATUS GRIDLINK              | Requests information on the performance of the grid links from the perspective of a specific TS7700. Performance is analyzed every 5 minutes.

Each request is also flagged as applying to the composite or the distributed library; for example, CACHE is N/A for the composite library and valid for the distributed library. These are the requests supported in R1.3; most are valid only for the distributed library, as they primarily request how a TS7700 is doing. The following charts in this presentation cover the details of the requests and examples of the responses.
46
TS7700 – Bulk Volume Information Retrieval
Invoked through a simple series of IEBGENER steps:
Create a logical volume containing the BVIR request (records 1 and 2).
The TS7700 writes the requested data to the logical volume (records 3 and above).
Read the logical volume to retrieve the data; a flat file is created with the requested data.
Available requests: Volume Status, Cache Contents, Physical Volume Map, Point-In-Time Statistics, Historical Statistics, Physical Media Pools, Physical Volume Status Volume, Physical Volume Status Pool, Copy Audit.
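A minimal sketch of the two IEBGENER steps, under the assumption that the request is written to an SMS-managed logical volume in the TS7700. The job, data set, and unit names are hypothetical, and the exact request record text and DCB requirements are defined in IBM's BVIR white paper rather than here.

//BVIRREQ  JOB (ACCT),'BVIR REQUEST',CLASS=A,MSGCLASS=X
//* Step 1: write the two request records to a scratch logical volume.
//* The request text is illustrative; see the BVIR documentation for
//* the exact record contents and formats.
//WRITEREQ EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
VTS BULK VOLUME DATA REQUEST
CACHE CONTENTS
/*
//SYSUT2   DD DSN=HLQ.BVIR.REQUEST,DISP=(NEW,CATLG,DELETE),
//            UNIT=TAPE,LABEL=(1,SL),
//            DCB=(RECFM=F,LRECL=80,BLKSIZE=80)
//*
//* After the volume is demounted, the TS7700 appends the response
//* records.  A later job reads the same logical volume back and
//* copies the flat file to DASD for reporting.
//BVIRRD   JOB (ACCT),'BVIR READ',CLASS=A,MSGCLASS=X
//READRSP  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=HLQ.BVIR.REQUEST,DISP=OLD
//SYSUT2   DD DSN=HLQ.BVIR.RESPONSE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE)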
47
TS7700 Historical Statistics
Free tools to generate statistics and produce reports are available on the Tapetools site: ftp://submit.boulder.ibm.com/download/tapetools/. Tools are updated as needed. The format of both Point-in-Time and Historical statistics is defined in a white paper.
48
Topics: History and Architecture; Hardware Components; Basic Concepts; Multi-Cluster Grids
49
TS7700 Hybrid Grid: all clusters within a multi-cluster grid configuration have access to all of the volumes defined in the composite library, regardless of cluster type. This allows a TS7720 to provide 256 more mount points to the volumes without requiring the addition of physical tape drives and a physical library, which can provide increased availability to customer data as well as additional performance. The existing copy consistency point function works the same in a hybrid configuration as it does in a homogeneous configuration; copies can be targeted to any cluster within the grid.
50
TS7720 and TS7740 Cache Size Considerations
When a virtual device in the TS7740 is allocated: the logical volume remains in cache until it is migrated out based on the tape migration algorithms. Typically the data is treated as PG1 and remains in cache as long as possible; the Storage Class could instead be set up to use PG0 in the TS7740, removing it from cache soon after it is pre-migrated to tape. The copy in the TS7720 remains in cache until it ages out according to the Automatic Removal Policy.
When a virtual device in the TS7720 is allocated: the logical volume remains in cache until it ages out according to the removal policies. The copy in the TS7740 is typically treated as PG0 and is removed from cache soon after it is pre-migrated to tape.
51
Two Cluster Grid – 20/40 Balanced
(Diagram: a local host attaches to both clusters, CL0 and CL1, one a TS7720 and the other a TS7740 with its tape library, interconnected over the LAN/WAN.)
This configuration has one TS7720 and one TS7740, with host connectivity to both clusters.
52
Three Cluster Grid – 20/40 Balanced, 40 DR
(Diagram: at the local production site, hosts attach to two clusters, CL0 and CL1, a TS7720 and a TS7740 with its tape library; a third cluster, CL2, a TS7740 with its tape library, sits at the remote site across the LAN/WAN.)
The production site contains one TS7720 and one TS7740, and the remote site contains one TS7740. There are no channel extenders online to the remote cluster. All clusters have a copy of all data.
53
Three Cluster Grid - 20/20 Balanced, 40 DR
(Diagram: at the local production site, hosts attach to two TS7720 clusters, CL0 and CL1; a third cluster, CL2, a TS7740 with its tape library, sits at the remote site across the LAN/WAN.)
The production site contains two TS7720s and the remote site contains one TS7740. Two copies of each volume are kept: one at the mounting TS7720 and the second at the remote TS7740. The remote TS7740 is shared by the TS7720s.
54
Hybrid Four Cluster Grid for HA
(Diagram: an HA production site and an HA DR site, each with one TS7740 and one TS7720, connected over the WAN.)
High cache-hit percentage due to a production cache size of 440 TB, without limiting the total solution to 440 TB.

This slide shows one possibility for a lower cost, high availability, disaster recovery, four-cluster grid. Both sites have a TS7740 and a TS7720, providing a large cache and a deep back end at each location. Through copy policies, both sites can contain all of the customer's data. All data is accessible when one cluster is not available; in fact, all data is available as long as access to one of the TS7740s is available. With Device Allocation Assist, introduced in Release 1.5, the host will select the best cluster, the TS7720 or the TS7740, to satisfy a private mount; the TS7720's deep cache increases the cache hit ratio dramatically. With Device Allocation Assist, the production host will prefer allocations to a cluster that has the data in cache.
55
TS7720 Front End, TS7740 Back-end Hybrid
Three production TS7720 clusters all feed into a common TS7740; each TS7720 primarily replicates only to the common TS7740.
Provides 1320 TB of high-performance production cache when running in balanced mode.
The installed TS7740 performance features can be minimal, since host connectivity to it would not be expected.
Production data migrates into the TS7740; if a TS7720 reaches capacity, the oldest data that has already been replicated to the TS7740 is removed from the TS7720 cache.
Copy Export can be utilized at the TS7740 in order to have a second copy of the migrated data. The duration between copy exports can be longer, since the last N days of data have not yet been migrated.

A compelling use of the four-cluster grid is three TS7720 production clusters sharing a single TS7740 remote cluster. By replicating only to the TS7740, the host has access to the cache of the three TS7720s, providing a very deep cache at the production site. Device Allocation Assist helps the host direct a private mount to the TS7720 that has the volume in cache; if the volume is so old that it exists only in the TS7740, any of the TS7720s can access the volume via the grid links. Copy Export on the TS7740 can be used to make sure there are always two copies of the data.
(Diagram: three TS7720 clusters, with a combined production capacity of 1320 TB, connect over the LAN/WAN to a single TS7740, which performs Copy Export.)
56
Reference Materials
Techdocs (www.ibm.com): search for TS7700 white papers and presentations.
Redbooks: search on TS7700.
TS7700 InfoCenter: Managing -> Management Interface.
z/OS DFSMS Object Access Method Planning, Installation and Storage Administration (PISA) Guide for Tape Libraries (SC ).
57
Questions?
58
Trademarks The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both. Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States. For a complete list of IBM Trademarks, see *, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter® The following are trademarks or registered trademarks of other companies. Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce. * All other products may be trademarks or registered trademarks of their respective companies. Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. 
IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.