DS8000 with z/VM
Charlie Burger (ATS), Steve Wilkins (z/VM Development)
11/23/2018 - IBM Confidential
Trademarks
The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both: AS/400, DS6000, DS8000, DS Storage Manager, Enterprise Storage Server, FICON, FlashCopy, GDPS, IBM, iSeries, pSeries, RS/6000, RMF, IBM TotalStorage, VM/ESA, VSE/ESA, xSeries, z/OS, zSeries, z/VM, z/VSE, On Demand Business. Intel and Pentium are trademarks of Intel Corporation in the United States, other countries, or both. Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Thank You!
Thanks to Steve Wilkins (z/VM Development) for allowing me to use material he has developed and for reviewing the information for accuracy.
Web Sites of Interest
- z/VM V6R1.0 Information Center
- VM Parallel Access Volume (PAV) and HyperPAV Support: basis for most of the z/VM PAV material in this presentation
- What's New in z/VM I/O Support? (Extended Address Volumes and FlashCopy SE): basis for most of the z/VM EAV material in this presentation
- VM Parallel Access Volumes Support: introduction to PAV
- VM Parallel Access Volumes Support (initial PAV support)
- IBM HyperPAV Support on z/VM
Agenda
- Parallel Access Volumes (PAVs)
- FlashCopy
- Modified Indirect Data Address Words (MIDAWs)
- High Performance FICON (zHPF)
- Extended Address Volume (EAV)
Parallel Access Volumes (PAV)
Parallel Access Volumes (PAVs)
- System z architecture allows only 1 active I/O to a single ECKD DASD device; if there is an active I/O to the device, a new I/O request is queued. This is increasingly problematic as devices become larger.
- Aliases overcome this restriction, providing the ability to have multiple concurrent I/O operations on a single real DASD device (multiple device addresses for the same volume)
- PAVs allow multiple reads to be executed in parallel
- PAVs allow multiple writes to be executed in parallel, serialized by the extent specification in the Define Extent command
- Allows higher I/O throughput by reducing I/O queuing
PAV Addressing
- Multiple unit addresses per volume: one base address, and a maximum of 255 alias addresses
- New device control block types: PAV base and PAV alias
- Defined in the ESS/DS8000 configuration and in S/390 HCD / IOCP
- Base address
  - The actual unit address of the volume; one base address per volume
  - Space is associated with the base
- Alias address
  - Maps to a base address; I/O to an alias address runs against the base
  - No physical space is associated with an alias
Parallel Access Volume Types
- Static: the association between a PAV-Base and its PAV-Aliases is predefined and fixed
- Dynamic: the association between a PAV-Base and its PAV-Aliases is managed dynamically from a pool
  - z/OS WLM in goal mode manages the assignment of alias addresses; WLM instructs IOS when to reassign an alias
  - z/OS only
- HyperPAV: a feature of the DS8000 (R2.4) that removes the static Alias-to-Base binding associated with traditional PAVs
  - Alias and Base volumes are pooled per LSS; an Alias can be associated with any Base in the pool, done by the host on each I/O request
  - Makes traditional dynamic PAV obsolete
I/O without PAV
[Diagram: a DS8000 storage server with Logical Subsystem (LSS) 0800 containing base devices UA=01 and UA=02; multiple z/VM images do application I/O to base subchannels 0801 and 0802, and if a device is busy the I/O is queued.]
Without PAV, the volume(s) containing the data set are provided by the catalog or allocation, and the UCB address and extents are obtained at open. I/O is performed to the base UCB only; if the volume is busy (the UCBBUSY bit is on), the I/O is queued and retried later when the volume is not busy. This elongates I/O response times and, in general, slows application performance. The idea of a multiple-exposure device has been around for more than 20 years (I saw it first with 3350P devices), but Parallel Access Volumes were introduced as a licensed function of the ESS and continued with the DS6000 and DS8000 products. They not only alleviate a customer's existing IOS queuing problems with applications, but also allow applications to co-exist on larger volumes without creating IOS queuing issues.
Parallel Access Volumes
[Diagram: DS8000 storage server with LSS 0800 containing base devices UA=01 and UA=02 plus alias devices UA=F0 through UA=F3; z/VM images do application I/O to bases 0801/0802, with alias subchannels 08F0-08F3 providing additional exposures.]
With PAVs, each base volume within a Logical Subsystem (LSS) can be configured with one or more aliases. Each exposure (base or alias) is capable of having an I/O active simultaneously; for example, a base volume with 2 aliases could have 3 I/Os started at the same time. Additional I/Os initiated to the base volume remain queued in z/OS until an exposure to it becomes available. Applications direct I/O to the base volume; if the base volume is busy during I/O initiation, z/OS automatically selects one of the alias exposures assigned to that base volume and starts the I/O using that exposure. In the picture, the UCB is z/OS's representation of a device: base PAV devices are used by applications, while alias PAV devices are used only by z/OS. Alias exposures are relatively static today. During initialization, z/OS queries the storage server to determine which alias exposures belong to which base volumes; the ESS is used to provide the initial alias/base configuration. Over the life of an IPL, this alias/base configuration can change due to I/O loads within the logical subsystem. WLM determines whether alias exposures should be reassigned to different bases, and facilitates those alias moves to meet overall system workload goals. Each image within a sysplex shares the same view of the LSS and, therefore, the same alias/base relationships; if an alias is moved, all systems in the sysplex must reassociate the alias with its new base volume.
HyperPAV
[Diagram: DS8000 storage server with LSS 0800 containing base devices UA=01 and UA=02 and a pool of alias devices UA=F0 through UA=F3; each z/VM image draws alias subchannels 08F0-08F3 from the pool as needed for I/O to bases 0801/0802.]
With HyperPAV technology, z/OS uses pools of aliases (by LSS). As each application I/O is requested, if the base volume is busy with another I/O, z/OS selects (and removes) a free alias from the pool and starts the I/O to the base address through the selected alias. When the I/O completes, the alias device is used for another I/O on the LSS or is returned to the free alias pool. If too many I/Os are started simultaneously, z/OS queues the I/Os at the LSS level; when an exposure that can be used for queued I/Os frees up, they are started, FIFO within assigned I/O priority. Notice that within the sysplex, each z/OS image uses aliases independently. WLM is not involved in alias movement, so it does not need to collect information to manage HyperPAV aliases. If each LPAR needs 20 aliases, with traditional Parallel Access Volumes you'd need 60 aliases for 3 LPARs; with HyperPAV, the same requirement can be serviced with 20 aliases.
Benefits of HyperPAV
- Reduce the number of required aliases
  - Give back addressable device numbers
  - Use the additional addresses to support more base addresses and larger-capacity devices
- React more quickly to I/O loads
  - React instantaneously to market-open conditions
- Reduced overhead of managing alias exposures
  - WLM is not involved in measuring and moving aliases
  - Alias moves are not coordinated throughout the sysplex
  - Initialization doesn't require "static" bindings
  - Static bindings are not required after swaps
  - I/O reduction: no longer need to BIND/UNBIND to manage HyperPAV aliases
- Increases I/O parallelism
HyperPAV allows you to define aliases that are applicable to more devices (since they are not "statically" used). The total number of aliases needed in an LSS equals the maximum number of aliases needed by any attached LPAR. The actual mechanism for Alias/Base association is IBM proprietary and was added to the 2107 attachment specifications.
PAV Prerequisites for any Implementation
- The DS8000 must have the PAV license. Specify the feature code that represents the CKD physical capacity allowed for the function; logical configuration is allowed up to the licensed capacity. HyperPAV requires an additional license.
- Include the FICON/ESCON attachment feature. HyperPAV and PAV require FICON.
- Activate the appropriate PAV feature code.
- Define the IOCP so that your configuration reflects and matches the planned DS8000 configuration.
- Configure the DS8000 with base and alias devices. Create a configuration on the DS8000 that matches the IOCP definition or planned IOCP definition. You can use the DS8000 GUI or DS CLI.
- Configure your operating system for base and alias devices.
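As a hedged sketch of the licensing step above (the storage image ID and key file name are made up, and exact operands vary by DS CLI release), applying and verifying the feature keys from the DS CLI might look like:

```text
# Apply the licensed feature keys obtained for this storage image
# (mykeys.xml and the storage image ID are placeholders)
dscli> applykey -file mykeys.xml IBM.2107-75FA120

# Verify that the PAV (and, if purchased, HyperPAV) features are now active
dscli> lskey IBM.2107-75FA120
```

Consult the DS CLI documentation for your release before relying on these operand forms.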
IOCP Definition
PAV volumes can be defined as 3390 Model 2, 3, or 9 (including Models 27 and 54) DASD on 3990 Model 3 or 6, 2105, 2107, or 1750 storage controllers. Use 2107 for the DS8000. 3380 track-compatibility mode for 3390 Model 2 or 3 DASD is also supported.
IOCP Statements:
- CNTLUNIT: control units for Bases and Aliases are UNIT=3990, 2105, 2107, or 1750
- IODEVICE: Base UNIT=3390, 3390B, 3380, or 3380B; Alias UNIT=3390, 3390A, 3380, or 3380A
IOCP PAV Example
**********************************************************************
* DEFINE LOGICAL CONTROL UNIT                                        *
CNTLUNIT CUNUMBR=0701,PATH=(70,71,72,73),UNITADD=((00,128)),         *
      LINK=(24,2D,34,3D),CUADD=1,UNIT=2107
* DEFINE BASE AND ALIAS ADDRESSES ON LOGICAL CONTROL UNIT 1          *
* 16 BASE ADDRESSES, 3 ALIASES PER BASE                              *
IODEVICE ADDRESS=(A00,016),CUNUMBR=(0701),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(B00,048),CUNUMBR=(0701),STADET=Y,UNIT=3390A
DS8000 Logical Volume Configuration
- The disk subsystem configuration should match the IOCP
- Use the HMC or DS CLI to initially define which subchannels are Base subchannels, which subchannels are Alias subchannels, and which Alias subchannels are initially associated with each Base
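For illustration only (the extent pool, volume ranges, capacities, and alias operands below are made up, and `mkaliasvol` operand forms differ between DS CLI releases), the DS CLI step might be sketched as:

```text
# Create CKD base volumes 0800-080F in an existing extent pool (illustrative values)
dscli> mkckdvol -extpool P1 -cap 10017 -name pavbase_#h 0800-080F

# Assign 3 alias addresses per base, allocating alias unit addresses
# downward from the top of the LSS (illustrative operands)
dscli> mkaliasvol -base 0800-080F -order decrement -qty 3 08FF
```

Treat this as a sketch of the workflow rather than exact syntax; the DS8000 Command-Line Interface User's Guide is the authority.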
z/VM and Parallel Access Volumes
VM Configuration
- A real Alias subchannel will not come online to VM without an associated real Base subchannel
- A real Base subchannel must have at least 1 associated real Alias subchannel for z/VM to recognize the device as a PAV Base subchannel
- Use the class B CP QUERY PAV command to view the current allocation of Base and Alias subchannels
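An illustrative QUERY PAV interaction might look like the following (the device numbers are made up and the exact message text varies by z/VM release):

```text
query pav all
Device 4580 is a base Parallel Access Volume device
  with the following aliases: 4581 4582 4583
Device 4581 is an alias Parallel Access Volume device
  whose base device is 4580
```

This is a sketch to show the kind of Base/Alias mapping the command reports, not verbatim output.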
Exploiting and Non-Exploiting Operating Systems
- Exploiting operating system: one that is capable of controlling the PAV architecture and is configured to control the features of PAV (z/OS, z/VM, and Linux)
- Non-exploiting operating system: one that is not configured to control the features of PAV, or has no knowledge of the PAV architecture
- z/VM provides PAV performance optimization across multiple non-exploiting guests when:
  - Full-pack minidisks are shared among guests with multiple LINK statements, or
  - Multiple non-full-pack minidisk volumes reside on a real PAV volume
- Performance gains are achieved by transparently multiplexing the I/O operations requested on each guest minidisk volume over the appropriate real PAV base and alias subchannels
Exploiting Guest PAV Minidisk Typical Configuration
A typical PAV minidisk configuration for exploiting guests has several guest virtual machines that exploit PAV volumes. One volume is a full-pack minidisk that is shared among the five guests (E100, PAK001), and there are four non-full-pack minidisk volumes (E200s and E300) that share the same underlying real PAV volume (PAK002). Note that there are more PAV minidisk volumes (5 MDISKs) than real volumes (2). z/VM will multiplex I/O operations on the real base and alias subchannels for each.
Non-Exploiting Guest PAV Minidisk Typical Configuration
In this configuration, there are five links to real volume PAK001. For these five virtual PAV base subchannels, there is one real PAV base and three real PAV alias subchannels that will be used to perform the I/O. z/VM will concurrently multiplex the I/O from the GUEST1-GUEST5 E100 virtual bases onto the real 4580, 4581, 4582, and 4583 subchannels as they are available. Using this strategy, it is possible to have many guest virtual machines sharing a real DASD volume, with z/VM dynamically handling the selection of real PAV base and alias subchannels. I/O operations to the minidisks defined on PAK002 will likewise be optimized by z/VM's dynamic selection of real PAV base and alias subchannels 4584, 4585, 4586, and 4587.
VM Configuration – Traditional (Prior to V5.2)
- PAVs were traditionally supported by VM for guests as dedicated DASD
- Base and Alias devices could be dedicated to a single guest or distributed across multiple guests
- Configured to guest(s) with the CP ATTACH command or DEDICATE user directory statement
- Only for guests that exploited the PAV architecture, such as z/OS and Linux
VM Configuration – Dedicated DASD
- A Base and its Aliases may only be dedicated to one guest
- Still configured with the CP ATTACH command or DEDICATE user directory statement
- For guests that exploit the PAV architecture
VM Configuration – Minidisks
VM now supports PAV minidisks:
- VM provides linkable minidisks for guests that exploit PAV (e.g., z/OS and Linux)
- Base minidisks are defined with the existing MDISK or LINK user directory statements (the LINK command is also supported)
- Aliases are defined with the new PAVALIAS parameter of the DASDOPT and MINIOPT user directory statements, or with the new CP DEFINE PAVALIAS command
VM also provides workload balancing for guests that don't exploit PAV (e.g., CMS):
- The real I/O dispatcher queues minidisk I/O across system-attached Aliases
- Minidisks are defined as in the past; nothing has changed
ATTACH Command
- When attaching PAV DASD to a guest, the Base must be attached before any associated Alias. An associated Alias can only be attached to the same guest as the Base.
- When attaching PAV DASD to the system, the Base must be attached before any associated Alias.
- Aliases can be attached to the system and are exploited for VM I/O if they contain temporary disk (TDSK) or minidisk (PERM) allocations. Other CP volume allocations receive no benefit from system-attached Aliases.
DETACH Command
- When detaching PAV DASD from a guest, all dedicated Aliases associated with a particular Base must be detached from the guest before the Base can be detached.
- When detaching PAV DASD from the system, all system-attached Aliases associated with a particular Base must be detached from the system before the Base can be detached.
Minidisk Cache (MDC)
- Minidisk cache settings apply to the Base and are inherited by its Aliases
- The SET MDCACHE command may not be used with Aliases; it results in an error
DEFINE PAVALIAS (privilege class G)
- The DEFINE PAVALIAS command is used to create new virtual PAV Alias minidisks. The same function can also be accomplished with the DASDOPT and MINIOPT user directory statements.
- A newly defined virtual Alias is automatically assigned to a unique underlying real PAV Alias.
- The command will fail if no more unique real Aliases are available to be associated with the virtual Alias (per guest virtual machine).
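As a sketch (the virtual device numbers are made up), a guest with a PAV Base minidisk at virtual device 0100 might create an alias minidisk with:

```text
DEFINE PAVALIAS 0F01 FOR BASE 0100
```

The FOR BASE operand form mirrors the DEFINE HYPERPAVALIAS examples shown later in this deck; check the CP command reference for your z/VM level.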
QUERY VIRTUAL PAV (privilege class G)
User Directory: DASDOPT and MINIOPT
DASDOPT is used for full-pack minidisks:
  MDISK vdev devtype DEVNO rdev mode
  DASDOPT PAVALIAS vdev

  MDISK vdev devtype 0 END volser mode
  DASDOPT PAVALIAS vdev-vdev

  LINK userid vdev1 vdev2 mode
  DASDOPT PAVALIAS vdev.numDevs
MINIOPT is used for non-full-pack minidisks.
- The PAVALIAS option of the DASDOPT and MINIOPT statements is used to create virtual PAV Alias minidisks for a guest
- DASDOPT and MINIOPT should follow the MDISK or LINK statement associated with the virtual Base
- DASDOPT and MINIOPT may be continued on multiple lines with trailing commas
- You can have more Aliases in the user directory than exist in the hardware; virtual Aliases will be assigned in ascending order until the real associated Aliases are exhausted. This will not prevent logon!
- Use DEDICATE vdev rdev for all dedicated PAV Base and Alias devices
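Putting the statement forms above together, a hypothetical directory fragment (all device numbers, volser, and modes are made up) might look like:

```text
* Full-pack minidisk on real device 4580 with two PAV alias minidisks
MDISK 0100 3390 DEVNO 4580 MWV
DASDOPT PAVALIAS 0F01-0F02

* Non-full-pack minidisk on volume PAK002 with one alias, using MINIOPT
MDISK 0200 3390 0001 3338 PAK002 MR
MINIOPT PAVALIAS 0F03
```

This is an illustrative sketch only; match the extents and modes to your own configuration.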
Dynamic PAV
Dynamic PAV is the ability to re-associate an Alias device from one Base to another.
- Guest-issued dynamic PAV operation to a dedicated Alias: the real (and virtual) Alias-to-Base association will change as long as the new Base is dedicated to the same guest; otherwise, the dynamic PAV operation fails.
- Guest-issued dynamic PAV operation to an Alias minidisk: only the virtual configuration is altered, and only if the new virtual Base is the only minidisk on the underlying real Base and a unique real Alias is available to associate with the virtual Alias (per guest machine); otherwise, the dynamic PAV operation fails. The real Alias-to-Base association never changes for minidisks.
- Out-board (control unit) initiated dynamic PAV operations: all Alias minidisks associated with a real system-attached Alias will be detached from their guests. A dedicated Alias behaves as if the guest issued the dynamic PAV operation.
HyperPAV
VM support for dedicated DASD and full-pack minidisks
HyperPAV and z/VM
- VM dedicated DASD support via the CP ATTACH command or DEDICATE user directory statement
- VM minidisk support:
  - Workload balancing for guests that don't exploit HyperPAV
  - Linkable full-pack minidisks for guests that do exploit HyperPAV
  - The new CP DEFINE HYPERPAVALIAS command creates HyperPAV Alias minidisks for exploiting guests
- z/VM and z/OS are current exploiters of HyperPAV
- Restricted to full-pack minidisks for exploiting guests
Configuration
- HyperPAV Base and Alias subchannels are defined on the control unit's Hardware Management Console and in IOCP no differently than traditional PAVs
- HyperPAV is a priced hardware feature that enables the floating-Alias function associated with the HyperPAV architecture for each LSS (logical control unit)
- The operating system host determines which LSS (logical control unit) is in HyperPAV vs. traditional PAV mode
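Since the host decides the PAV mode per logical control unit, a sketch of switching an LSS into HyperPAV mode from CP might be (the SSID value is made up; verify the operand form against the CP command reference):

```text
/* Put the logical control unit with SSID 0800 into HyperPAV mode */
SET CU HYPERPAV 0800
```

A corresponding QUERY of the control unit can confirm the mode change took effect.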
HyperPAV Usage Considerations
- A real HyperPAV Alias subchannel will not come online unless a HyperPAV Base exists in the same hardware Pool
- A real HyperPAV Base subchannel needs at least 1 HyperPAV Alias in the same hardware Pool for z/VM to recognize the device as a HyperPAV Base subchannel
- Use the class B CP QUERY PAV command to view the current HyperPAV Base and Alias subchannels along with their associated Pools
ATTACH/DETACH Commands
- Unlike traditional PAV DASD, HyperPAV Base and Alias devices can be attached to and detached from a guest or the system in any order. There are no Base-before-Alias (or vice versa) restrictions.
- HyperPAV Aliases can be attached to the system and are exploited for VM I/O if they contain temporary disk (TDSK) or minidisk (PERM) allocations. Other CP volume allocations receive no benefit from system-attached HyperPAV Aliases.
Minidisk Cache (MDC)
- Minidisk cache settings do not apply to HyperPAV Aliases; cache settings are only applicable to HyperPAV Base devices
- The SET MDCACHE command may not be used with HyperPAV Aliases; it results in an error
DEFINE HYPERPAVALIAS (privilege class G)
- The DEFINE HYPERPAVALIAS command is used to create new virtual HyperPAV Alias minidisks
- A newly defined virtual Alias is automatically assigned to a unique underlying real HyperPAV Alias (in the same real hardware Pool as the Base)
- The command will fail if no more unique real Aliases are available in the real hardware Pool to be associated with the virtual Alias (per guest virtual machine)
- There can be at most 254 Aliases per Pool, and a limit of 16,000 Pools per image
- The command is restricted to full-pack minidisks
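As an illustrative sketch (the virtual device numbers are made up), an exploiting guest with a full-pack Base minidisk at virtual device 0100 could create a HyperPAV alias minidisk with:

```text
DEFINE HYPERPAVALIAS 0F00 FOR BASE 0100
```

The same form can be issued from the user directory via the COMMAND statement, as shown on the User Directory COMMAND slide.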
QUERY VIRTUAL PAV: Dedicated and Non-Dedicated (minidisks)
User Directory COMMAND
- Use the COMMAND user directory statement with the DEFINE HYPERPAVALIAS command to create virtual HyperPAV Alias minidisks
- COMMAND statements must appear before all device definition statements, such as the MDISK and LINK statements for the Base minidisks
Examples:
  COMMAND DEFINE HYPERPAVALIAS vdev FOR BASE basevdev
  MDISK basevdev devtype DEVNO rdev mode

  COMMAND DEFINE HYPERPAVALIAS vdev FOR BASE basevdev
  MDISK basevdev devtype 0 END volser mode

  COMMAND DEFINE HYPERPAVALIAS vdev FOR BASE basevdev
  LINK userid sourcevdev basevdev mode
- Use DEDICATE vdev rdev for all dedicated HyperPAV Base and Alias devices
Configuration File
The following system configuration file statements are useful in z/VM for managing HyperPAV devices:
- SYSTEM_Alias: specifies HyperPAV Alias devices to be attached to the system at VM initialization
- CU: defines how VM initializes specific control units; similar to the CP SET CU command (i.e., sets controller PAV mode)
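A hypothetical SYSTEM CONFIG fragment using these two statements might look like the following (the device range and SSID are made up, and the CU operand order is illustrative; verify both against the CP Planning and Administration guide for your z/VM level):

```text
/* Attach HyperPAV Alias devices 0F00-0F1F to the system at initialization */
SYSTEM_Alias 0F00-0F1F

/* Initialize the logical control unit with SSID 0800 in HyperPAV mode */
CU HYPERPAV SSID 0800
```

This is a sketch of intent, not verified statement syntax.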
FlashCopy
FlashCopy Overview
- "Instant" T0 (time-zero) copy
- Source and target volumes are immediately available for processing with full read/write access
- A hardware solution invoked by software:
  - z/OS: DFSMSdss, TSO, ICKDSF, API, DS CLI, DS Storage Manager (GUI), TPC for Replication
  - z/VM and z/VSE: ICKDSF, host commands, DS CLI, DS Storage Manager, TPC for Replication
How does FlashCopy work?
- Request a copy from source to target: a FlashCopy relationship is created between the volumes, and the target is available for processing once the relationship is created
- BACKGROUND COPY
  - Tracks are copied from the source to the target
  - Attempts to read/write data already copied proceed as normal
  - Attempts to read a target track not yet copied are intercepted and the data is obtained from the source
  - Attempts to write a source track not yet copied are intercepted and the source track is copied to the target before the update occurs
- BACKGROUND NOCOPY
  - Attempts to read a target track not copied are intercepted and the data is obtained from the source
  - Attempts to write a source track not copied are intercepted and the source track is copied to the target before the update occurs
FlashCopy Implementation – BACKGROUND COPY
[Diagram: the copy data command is issued and the copy is immediately available; reads and writes to both source and target are possible while tracks are copied in the background. When the copy is complete, the relationship between source and target ends.]
FlashCopy Implementation – BACKGROUND NOCOPY
[Diagram: the copy data command is issued and the copy is immediately available; reads and writes to both source and target are possible, but no background copy runs. The relationship between source and target exists until it is withdrawn or all tracks are copied.]
FlashCopy Prerequisites for any Implementation
- FlashCopy (PTC) and FlashCopy SE (SE) are separate licenses. You can have either PTC or SE, or both; SE does not require PTC.
- If you only have a FlashCopy SE license, all FlashCopy tasks must be NOCOPY, even if the target is NOT a Space Efficient volume.
- The DS8000 must have the PTC and/or SE license. Specify the feature code that represents the physical capacity allowed for the function; logical configuration is allowed up to the licensed capacity.
- Activate the appropriate PTC and/or SE feature code. If there will be a mix of FB and CKD volumes, you can specify a capacity less than the actual total capacity and then activate it for either FB or CKD.
DS8000 FlashCopy Functions
- Full-volume FlashCopy (BACKGROUND COPY or NOCOPY)
- FlashCopy NOCOPY to COPY
- Persistent FlashCopy
- Incremental FlashCopy
- Inband FlashCopy
- Consistency Group FlashCopy
- Fast Reverse Restore
- Multiple relationships
- Data Set FlashCopy (z/OS and z/VSE only)
- FlashCopy Space Efficient (a separate feature from PTC)
- The target volume can be a Metro Mirror or Global Copy primary volume
  - You must use parameters in the command that indicate you know it is a primary
  - The duplex volume falls into copy-pending state
- The target cannot be a primary in a Global Mirror or z/OS Global Mirror session
FlashCopy and ICKDSF
ICKDSF can be used by z/OS, z/VM, or z/VSE to invoke FlashCopy rather than native commands:
- FlashCopy BACKGROUND COPY and NOCOPY
- Incremental FlashCopy
- Inband FlashCopy
- FRR-enabled FlashCopy
- FlashCopy Fast Reverse Restore (FRR)
- FlashCopy Space Efficient
- FlashCopy Query
- Remote Pair FlashCopy
z/VM and FlashCopy
FlashCopy Functions Available via Native CP Command
- Full-volume FlashCopy (BACKGROUND COPY or NOCOPY)
- FlashCopy NOCOPY to COPY
- Persistent FlashCopy
- Incremental FlashCopy
- Fast Reverse Restore
- Multiple relationships
- FlashCopy Space Efficient (a separate feature from PTC)
Requires z/VM V6.1, or z/VM V5.4 with the PTFs for APARs VM64449, VM64605, and VM64684.
z/VM CP FlashCopy Commands
- FLASHCOPY BACKGNDCOPY: convert NOCOPY to COPY
- FLASHCOPY ESTABLISH: allows for Persistent, Incremental, and SE relationships, and FRR
- FLASHCOPY RESYNC: establish a new increment (checkpoint) for a relationship established with CHGRECORD
- FLASHCOPY TGTWRITE: allow writes to a target whose relationship was established with NOTGTWRITE
- FLASHCOPY WITHDRAW: remove a FlashCopy relationship
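A sketch of a typical command sequence using the vdev-style operands shown in the usage notes later in this deck (the device numbers are made up, and the WITHDRAW operand form should be verified against the CP command reference):

```text
/* Create a NOCOPY relationship from vdev 0200 to vdev 0300 */
FLASHCOPY ESTABLISH SOURCE 0200 TARGET 0300 NOCOPY

/* Later, convert the NOCOPY relationship to a full background copy */
FLASHCOPY BACKGNDCOPY SOURCE 0200

/* Or, instead, remove the relationship */
FLASHCOPY WITHDRAW TARGET 0300
```

The ESTABLISH and BACKGNDCOPY forms follow the examples in the usage notes; treat the whole sequence as illustrative.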
FLASHCOPY (privilege class B)
Source-target pairs supported by FLASHCOPY include dedicated devices, full-pack minidisks, and non-full-pack minidisks. [Table of supported source/target combinations not recoverable from this transcript.]
FLASHCOPY BACKGNDCOPY (privilege class B)
For non-space-efficient relationships, use FLASHCOPY BACKGNDCOPY to initiate background copying of unmodified source data to its targets. This command reverses the effect of the NOCOPY option of the FLASHCOPY ESTABLISH command.
Usage Notes
1. This command has no effect if the relationship was not established with the implied or explicit NOCOPY option.
2. Issuing the command to any source cylinder within a relationship causes all of the referenced relationship to begin its background copy operations. For example, if you specify:
   FLASHCOPY ESTABLISH SOURCE 200 TARGET 300 310 NOCOPY
   FLASHCOPY ESTABLISH SOURCE 200 TARGET 320 NOCOPY
   FLASHCOPY BACKGNDCOPY SOURCE 200
   background copying will begin for all cylinders for targets 300, 310, and 320.
3. Multiple executions of this command against the same source extent have no additional effect.
4. Copying of the data from the source to its targets might not occur immediately, depending on the availability of storage subsystem resources.
5. It is suggested, but not required, that the NOSETARGET operand be specified when establishing a relationship for use with this command.
6. If one or more affected targets are space efficient volumes, the command will be rejected by the hardware. However, this will NOT prevent the background copy from being started on any targets that are NOT space efficient.
FLASHCOPY ESTABLISH (privilege class B)
Usage Notes
1. A TARGET of up to 12 vdevs can be specified, with a limit of 110 extent descriptions shared among them.
2. Some operands are restricted to dedicated devices or full-pack minidisks. This can change as new hardware releases occur.
3. You cannot mix fullext and miniext specifications in a command, even if they describe areas of the same total space.
4. Each source cylinder can be the source for up to 12 relationships. This limit is enforced even when multiple FLASHCOPY ESTABLISH commands are issued. For example, if you specify FLASHCOPY ESTABLISH SOURCE 0200 TARGET 0300 and FLASHCOPY ESTABLISH SOURCE 0200 TARGET 0310, you now have used two of the 12-relationship limit on source 0200. This limit also applies to relationships established on other LPARs.
5. Specifying a single FLASHCOPY ESTABLISH SOURCE 0200 command with multiple targets is not quite the same as issuing a separate command per target. In the first case, one sequence number is assigned to the relationship that encompasses all targets; in the second case, there are separate relationships, each with a unique sequence number.
6. Cylinders are copied in the order they are specified. For example, if the source extents are specified as cylinders 0-49 followed by 50-99 and the target extents as 50-99 followed by 0-49, then cylinders 0 through 49 will be copied to cylinders 50 through 99, and cylinders 50 through 99 will be copied to cylinders 0 through 49.
7. A T-disk (created with the DEFINE command) cannot be used as a persistent FlashCopy source or target.
8. Persistent FlashCopy relationships are visible to all LPARs that are attached to the storage LSS.
9. A persistent relationship that is created on one LPAR can be withdrawn from another.
10. If both SAVELABEL and LABEL are omitted, the volume label is copied from vdev1 by the FlashCopy operation if source virtual cylinder 0 is copied onto target virtual cylinder 0. The volume label will be copied to the corresponding target cylinder.
11. When SAVELABEL or LABEL is specified, the volume label is always updated on target virtual cylinder 0, even if target virtual cylinder 0 was not copied.
12. Failure of LABEL will not prevent the establishment of the relationships.
13. Failure of SAVELABEL will not prevent the establishment of relationships.
14. FLASHCOPY ESTABLISH is not supported on PAV or HyperPAV Alias vdevs.
15. FlashCopy to space-efficient targets is only supported by the hardware with full-volume targets.
16. Due to the CP command line length limit of 240 characters, it might not be possible to specify the full 110 extents in some command entry modes.
17. When a relationship is created with the NOTGTWRITE option, any attempt to write on the volume (other than by FlashCopy itself) will place the volume into the Intervention Required state and possibly cause the writing virtual machine to stop.
FLASHCOPY RESYNC (privilege class B)
Use FLASHCOPY RESYNC to resynchronize the specified target of a persistent, checkpointed FlashCopy relationship, making the target consistent with the current state of the specified source and establishing a new checkpoint. The source volume is not changed.
- Checkpoints are established at the completion of the FLASHCOPY ESTABLISH CHGRECORD, FLASHCOPY ESTABLISH REVERSIBLE, and FLASHCOPY RESYNC commands.
- Reversible RESYNC: a relationship can be reversed by specifying the corresponding source and target operands reversed from their specification during a prior ESTABLISH or RESYNC.
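An incremental backup cycle using RESYNC might be sketched as follows (device numbers are made up, and option placement is illustrative):

```text
/* Take the initial checkpointed copy with change recording enabled */
FLASHCOPY ESTABLISH SOURCE 0200 TARGET 0300 CHGRECORD

/* ... the source is updated over time ... */

/* Copy only the tracks changed since the last checkpoint,
   and establish a new checkpoint */
FLASHCOPY RESYNC SOURCE 0200 TARGET 0300
```

Because only changed tracks move on each RESYNC, repeated refreshes of the target are much cheaper than repeated full ESTABLISH operations.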
FLASHCOPY TGTWRITE (privilege class B)
Use FLASHCOPY TGTWRITE to enable writing on a target full volume that was write-inhibited with the FLASHCOPY ESTABLISH NOTGTWRITE option.
Usage Notes
1. The relationship will not be withdrawn.
2. Applying this command to a REVERSIBLE relationship makes it no longer reversible.
FLASHCOPY WITHDRAW (privilege class B)
Use FLASHCOPY WITHDRAW to remove a persistent FlashCopy relationship.
Usage Notes
1. When the last target extent is withdrawn, the corresponding source relationships are withdrawn automatically.
2. If background source-to-target copies are pending and the relationship was established with NOCOPY, you must issue a separate FLASHCOPY BACKGNDCOPY command to initiate the copying process. The FORCE option will not initiate a background copy.
3. After a WITHDRAW FORCE, it is likely that the target will not be in a consistent state, because the background copy operation might not have completed and some of the target tracks will therefore not contain the correct data. Issuing a WITHDRAW FORCE during a point-in-time instant backup will likely lead to a backup tape that is not usable.
4. If the number of remaining tracks to be copied is unknown, the value is computed as the size of the extent plus one track.
5. Due to the CP command line length limit of 240 characters, it might not be possible to specify the full 110 extents in some command entry modes.
6. FORCE is required for space-efficient targets; note again that FORCE does NOT initiate a background copy operation.
60
Modified Indirect Data Address Words
(MIDAWs) - IBM Confidential
61
What is the MIDAW Facility?
Improves the performance of sequential I/O to data sets with 4K blocks, especially Extended Format (EF) data sets. Eliminates the EF performance penalty and shrinks the small DB2 page size performance penalty. MIDAWs are implemented by Media Manager. Lower utilization of the FICON channel, link, and CU host adapter. - IBM Confidential
62
Channel Programming… IDAWs and data chaining are features of channel programming that have been around for many years. IDAWs have 4K boundary restrictions which make them inappropriate or inefficient for the 32-byte EF data set suffixes. Data chaining can be used to chain storage addresses that are not virtually contiguous in zSeries memory, but each new piece of data in the chain requires a separate CCW in the channel program. MIDAWs simply remove the 4K boundary restrictions from IDAWs, so that data chaining is no longer needed for the 32-byte suffixes. - IBM Confidential
63
The problem that MIDAWs address in a little more detail
Extended Format data set pages include a 32-byte suffix; non-EF pages do not. Indirect Data Address Words (IDAWs) have no length field, so data has to be transferred as 4KB elements. This generally forces some of the data transfer not to be aligned on a 4KB page boundary. Lack of page alignment forces an additional CCW, and therefore more traffic to transfer the same amount of actual data, hence lower channel efficiency for EF and degraded transfer rates. At 4KB, 24 CCWs are required to transfer a track for EF, versus 12 CCWs for non-EF. - IBM Confidential
64
How MIDAWs address this problem
MIDAWs are MODIFIED IDAWs. They contain a new length field, which allows for non-page alignment. Non-page-alignment support means the extra CCW is not needed; that extra CCW is a bigger proportion of total traffic for smaller transfers (e.g., 4KB vs. 16KB). Reducing the number of CCWs in a channel program improves channel efficiency, and transfer rates improve as well. At 4KB, 1 CCW transfers a track whether EF or not. - IBM Confidential
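The CCW counts quoted in these charts can be captured in a toy calculation. This is illustrative arithmetic only, not a channel-program builder; the assumption of twelve 4KB records per 3390 track is taken from the pre-MIDAW figures above.

```python
# Simplified model of CCW counts per track, matching the numbers in the
# charts; not an actual channel-program construction.

def ccws_per_track(pages_per_track=12, extended_format=False, midaw=False):
    """CCWs needed to read one track of 4K pages.

    Pre-MIDAW: one CCW per 4K page, plus a data-chained CCW per page for
    the 32-byte Extended Format suffix. With MIDAWs, a single CCW points
    at a MIDAW list that describes every page (and suffix) directly.
    """
    if midaw:
        return 1
    return pages_per_track * (2 if extended_format else 1)

print(ccws_per_track(extended_format=False))              # → 12 (non-EF, pre-MIDAW)
print(ccws_per_track(extended_format=True))               # → 24 (EF, pre-MIDAW)
print(ccws_per_track(extended_format=True, midaw=True))   # → 1
```

The 24-versus-12 gap is the EF penalty that MIDAWs eliminate: with a MIDAW list, EF and non-EF tracks cost the same.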
65
Pre-MIDAW channel programs (figure): a non-EF data set read issues one READ 4K CCW per page (32 CCWs in total in the example), while an EF data set read adds a READ CCW for each 32-byte suffix (64 CCWs in total). - IBM Confidential
66
EF Data Sets CCW Counts (figure): pre-MIDAW, an EF data set read needs a READ 4K CCW plus a READ CCW for each 32-byte suffix (64 CCWs in total in the example); with MIDAWs, EF or non-EF data sets need only track-level CCWs (3 or 4 CCWs in total). - IBM Confidential
67
MIDAW improves the performance in two ways
Simplex effects (i.e. single stream): This benefit is exclusive to EF datasets Multiplexing effects (i.e. parallel I/O): This benefit affects both EF and non-EF datasets Single stream and multiplexing benefits are additive EF benefits are much greater than non-EF benefits There is no non-EF benefit unless multiple streams share the channel - IBM Confidential
68
A Final “Perspective” on MIDAWs
MIDAW allows non-page-aligned data to be transferred with less traffic, improving channel efficiency and data rates. It reduces the EF penalty versus non-EF, reduces the small DB2 page size penalty, and allows EF data sets to be used with less inhibition. It better supports DB2 Active Log striping and >4GB table space data sets; combined with 1024 partitions in DB2 Version 8, it increases the limits on table space sizes, and as used by DB2 utilities it improves the time to manage these bigger table spaces. See the IBM Redpaper "How does the MIDAW Facility Improve the Performance of FICON Channels Using DB2 and other workloads?" - IBM Confidential
69
FICON I/O Protocol Hierarchy
If the I/O is zHPF eligible and to a device that supports zHPF, the I/O will use zHPF. If the I/O is MIDAW eligible and to a device that supports MIDAW, the I/O will use MIDAWs. If the I/O is neither zHPF nor MIDAW eligible, it will use standard FICON. (Figure: the outlined box represents the universal set of all I/O possible to a device, with zHPF and MIDAW shown as sets inside it; the sizes are not intended to be proportional to any customer workload. They represent the relationships between TCW, CCW, and MIDAW, which is a subset of CCW. All zHPF I/O uses TCWs; in this way zHPF is mutually exclusive with MIDAW and standard FICON, which both use CCWs. When zHPF is enabled, an I/O that was previously using MIDAW or FICON may be converted to zHPF, but never the other way around, unless an error was encountered on the first attempt using a TCW.) - IBM Confidential
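The eligibility cascade above is a simple priority decision, which can be sketched as follows. The flag names are illustrative, not actual z/OS IOS fields.

```python
# Sketch of the protocol-selection cascade for a single I/O.
# Parameter names are illustrative, not real control-block fields.

def select_protocol(zhpf_eligible, device_zhpf,
                    midaw_eligible, device_midaw):
    if zhpf_eligible and device_zhpf:
        return "zHPF (TCW)"
    if midaw_eligible and device_midaw:
        return "MIDAW (CCW)"
    return "standard FICON (CCW)"

print(select_protocol(True, True, True, True))     # → zHPF (TCW)
print(select_protocol(False, True, True, True))    # → MIDAW (CCW)
print(select_protocol(True, False, False, False))  # → standard FICON (CCW)
```

Note the asymmetry the figure describes: an eligible I/O is promoted toward zHPF when possible, never demoted from zHPF except on a TCW error.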
70
z/VM and MIDAW Modified Indirect Data Address Words (MIDAW) support
Allows guest use of MIDAWs when z/VM is running on MIDAW-capable servers z/Architecture MIDAW facility offers an alternative to using CCW data chaining in channel programs May reduce channel, director, and control unit overhead by reducing number of CCWs and frames that have to be processed May improve I/O throughput, especially on faster FICON channels Allows z/OS guests to exercise their MIDAW support in a z/VM test environment CP external interfaces updated: TRACE I/O command response TRACERED utility formatted I/O data DIAGNOSE code X'210' output - IBM Confidential
71
High Performance FICON for System z
(zHPF) - IBM Confidential
72
High Performance FICON Introduction
High Performance FICON for System z (zHPF) is a new data transfer protocol that is optionally employed for accessing data from an IBM DS8000 storage subsystem. Data accessed by DB2, PDSE, VSAM, zFS, and Extended Format SAM can benefit from the improved transfer technique. Single-track support was introduced with R4.1; multi-track support was introduced with R4.3. zHPF may help reduce the infrastructure costs for System z I/O by efficiently utilizing I/O resources, so that fewer CHPIDs, fibers, switch ports, and control unit ports may be needed. zHPF also complements the System z EAV strategy for growth by increasing the I/O rate capability as volume sizes expand vertically. As stated, High Performance FICON (aka zHPF) is a new data transfer protocol for accessing data on a DS8000 from a z10 host. Today the DS8000 is the only disk system that supports this protocol, and at this time only certain access methods are supported. Since High Performance FICON enables higher throughput per channel than FICON, the major potential benefit of zHPF is infrastructure reduction. - IBM Confidential
73
High Performance FICON (zHPF)
Improve FICON Scale, Efficiency and RAS As the data density behind a CU and device increase, scale I/O rates and bandwidth to grow with the data Significant improvements in I/O rates for OLTP Multi-track support with R4.3 Improved I/O bandwidth New ECKD commands for improved efficiency Improved first failure data capture Additional channel and CU diagnostics for MIH conditions Value Reduce the number of channels, switch ports, control unit ports and optical cables required to balance CPU MIPS with I/O capacity Reduce elapsed times (DB2, VSAM) 2X - IBM Confidential
74
High Performance FICON Highlights (continued)
Compatibility Between Existing CCWs and New TCWs: bilingual channel and control unit ports; CCWs continue to use FICON protocols, while TCWs use the new Transport Mode protocols. DS8000 code structure optimized for simple I/O chains: no CKD operation (ECKD only), streamlined internal communication protocols (equivalent to FCP exchanges). Improved RAS and workload management: additional channel and control unit diagnostics for MIH conditions; I/Os are queued in the control unit when a device is reserved by another host. As mentioned earlier, the DS8000 is bilingual, meaning that it can handle a mix of CCWs and TCWs concurrently. Note that only ECKD operations are eligible for zHPF; CKD operations will continue to use CCWs. zHPF also helps the DS8000 function more efficiently by streamlining internal communications, which become more comparable to FCP exchanges in terms of overhead, resulting in less SMP overhead. Finally, be aware that zHPF is a strategic direction for System z I/O; watch this space for future developments in this area. - IBM Confidential
75
Link Protocol Comparison for a 4KB READ
Link Protocol Comparison for a 4KB READ (figure): With FICON, the channel opens an exchange and sends the PREFIX command and data, then the READ command; the control unit responds with a CMR, 4K of data, and status; the exchange is then closed. With zHPF, the channel opens an exchange and sends a Transport Command IU; the control unit returns 4K of data and a Transport Response IU, and the exchange is closed. This chart helps illustrate the reduction in handshakes that zHPF brings to the table. For those familiar with FCP protocol, you will see a lot of similarities. The simpler protocol enables the big gains in throughput that zHPF delivers. zHPF provides a much simpler link protocol than FICON. - IBM Confidential
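The handshake reduction in the figure can be made concrete by listing the protocol steps side by side. This is a simplified step count taken from the chart; it ignores frame sizes, timing, and lower-level FC-2 details.

```python
# Simplified sequences of link-level steps for a 4KB read, per the chart.
# Counts protocol steps only; not a protocol implementation.

FICON_4K_READ = [
    "open exchange + PREFIX cmd & data",
    "READ command",
    "CMR (command response)",
    "4K of data",
    "status",
    "close exchange",
]

ZHPF_4K_READ = [
    "open exchange + Transport Command IU",
    "4K of data",
    "Transport Response IU",
    "close exchange",
]

print(len(FICON_4K_READ), len(ZHPF_4K_READ))  # → 6 4
```

Fewer round trips per I/O is what lets zHPF drive substantially higher I/O rates per channel, especially for small-block OLTP workloads.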
76
zHPF Exploitation The Media Manager component of z/OS builds a new type of channel program (TCW instead of CCW). Only data accessed by DB2, PDSE, VSAM, zFS, VTOC Index (CVAF), Catalog BCS/VVDS, or Extended Format SAM benefits from the enhanced transfer technique. The current architecture requires: unidirectional access (must be all reads or all writes; a read-count suffix on write chains cannot be translated to a zHPF TCW); correct counts; no searches (searches are not supported). Non-Extended Format BSAM and QSAM are not currently supported. Essentially all Media Manager I/O is supported as long as the I/O does not cross a track boundary. I/O chains that contain both a read and a write operation are not supported, and correct counts are required. Searches and non-extended-format BSAM and QSAM are also not supported. - IBM Confidential
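The translation rules above amount to a simple eligibility predicate over a CCW chain, sketched below. The command names are simplified mnemonics for illustration, not real CCW opcodes, and this check covers only the unidirectional and no-search rules.

```python
# Hedged sketch of two of the CCW-to-TCW translation rules described
# above. Command names are simplified mnemonics, not real CCW opcodes.

def chain_translates_to_tcw(chain):
    """A CCW chain can become a zHPF TCW only if it is unidirectional
    (all reads or all writes) and contains no search commands."""
    if any(cmd.startswith("SEARCH") for cmd in chain):
        return False
    directions = {("READ" if cmd.startswith("READ") else "WRITE")
                  for cmd in chain}
    return len(directions) == 1

print(chain_translates_to_tcw(["READ", "READ"]))          # → True
print(chain_translates_to_tcw(["WRITE", "READ COUNT"]))   # → False
print(chain_translates_to_tcw(["SEARCH ID EQ", "READ"]))  # → False
```

The second case mirrors the slide's point that a read-count suffix on a write chain makes the chain bidirectional and therefore untranslatable.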
77
z/VM and FICON Express8 High Performance FICON for System z (zHPF)
z/VM supports FICON Express8 on the System z10 EC and System z10 BC family of servers. FICON Express8 supports a link data rate of 8 gigabits per second (Gbps) and auto-negotiation to 2, 4, or 8 Gbps for synergy with existing switches, directors, and storage devices. Support for: native FICON and Fibre Channel Protocol (FCP). The FICON Express8 features are exclusive to IBM z10 EC and z10 BC servers. - IBM Confidential
78
Extended Address Volumes
(EAV) - IBM Confidential
79
Problem and Solution that EAV is Addressing
Running out of System z addressable disk storage: the 4-digit device number limit is fast approaching. Rapid data growth on the System z platform is leading to a critical problem for some of our customers, and Business Resilience solutions for continuous availability also drive this limit. IBM's solution to this problem is to continue the direction of defining larger volumes by increasing the number of cylinders, extending the number of cylinders per device beyond 65,520. This builds upon relief provided by: Parallel Access Volumes (PAV) with alias device numbers in an alternate subchannel set, HyperPAV, and Space Efficient FlashCopy. Rapid data growth on the z/OS platform (37% CGR disk storage growth '96-'07) means that for many customers the 4-digit device number limit is becoming a real constraint to growing data on z/OS. In addition, Business Resilience solutions (GDPS, HyperSwap, PPRC, etc.) that provide continuous availability are also driving this constraint. Our solution is to provide larger volumes (by several factors) by increasing the number of cylinders beyond 65,520. Volumes will grow to larger numbers of cylinders (millions) through a new track addressing method, discussed in the following charts. This relief builds upon prior technologies implemented in part to help reduce the pressure on running out of device numbers, including PAV, HyperPAV, and Space Efficient FlashCopy. - IBM Confidential
80
EAV: A volume with more than 65,520 cylinders
Extended Address Volume (EAV) is the Next Step in Larger CKD Volumes 3390-A "EAV" EAV: a volume with more than 65,520 cylinders. The HyperPAV function complements this design by scaling the I/O rates against a single volume. 3390 Model A: a device configured to have 1 to many cylinders. The Extended Address Volume (EAV) is the next step in providing larger volumes for z/OS; this support is provided in z/OS version 1 release 10. Over the years volumes have grown by increasing the number of cylinders and thus GB capacity, but the existing track addressing architecture has limited growth to relatively small-capacity volumes, which has put pressure on the 4-digit device number limit. z/OS V1R10 will be GA by 3Q08. With EAV, we are implementing an architecture that will provide capacities of hundreds of terabytes for a single volume. However, the first release is limited to a volume of 223GB, or 262,668 cylinders. This was done to help reduce the initial EAV testing requirements in order to get a solution out to the field, and to allow possible performance concerns to be understood and addressed before very large volumes hit the field. An EAV is defined to be a volume with more than 65,520 cylinders. A volume of this size has to be configured in the DS8000 as a 3390 Model A; however, a 3390 Model A is not always an EAV. A 3390 Model A is any device configured in the DS8000 to have from 1 to 268,434,453 cylinders. Model A support is provided in current DS8000 versions, while the EAV support will be provided in a subsequent release. An EAV, in the way it is managed by System z, is a general-purpose volume; it works especially well for applications with large files. PAV and HyperPAV technologies help in this regard by allowing I/O rates to scale as a volume gets larger. Definition of an EAV: extended address volume (EAV). A volume with more than 65,520 cylinders. Only 3390 Model A devices can be an EAV. 
Maximum sizes: 3390-3: 3GB (max cyls 3,339); 3390-9: 9GB (max cyls 10,017); 3390-9 (27GB): max cyls 32,760; 3390-9 (54GB): max cyls 65,520; 3390-A EAV: 100s of TBs architecturally, initially limited to 223GB (max cyls 262,668). - IBM Confidential
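The capacity figures above follow from the 3390 geometry. The sketch below is back-of-envelope arithmetic assuming the standard 15 tracks per cylinder and 56,664 bytes per track, reported in decimal gigabytes; the marketing labels (3GB, 9GB, 54GB) round these raw numbers.

```python
# Back-of-envelope 3390 capacity check, assuming standard geometry:
# 15 tracks per cylinder, 56,664 bytes per track, decimal GB.

def mod3390_capacity_gb(cylinders, bytes_per_track=56664, tracks_per_cyl=15):
    return cylinders * tracks_per_cyl * bytes_per_track / 1e9

for cyls in (3339, 10017, 32760, 65520, 262668):
    print(f"{cyls:>7} cyls = {mod3390_capacity_gb(cyls):6.1f} GB")
# 262,668 cylinders works out to about 223 GB, matching the initial EAV limit
```

Running this shows why 262,668 cylinders is quoted as a 223GB volume, and why 65,520 cylinders lands in the mid-50GB range.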
81
Track Addresses Existing track address format with 16-bit cylinder number: CCCCHHHH, a 16-bit cylinder number followed by a 16-bit track number. Today's supported maximum volume size is 65,520 cylinders, near the 16-bit theoretical limit of 65,535. To handle cylinder numbers greater than 65,520, a new format for the track address is required. The existing track address is described as CCCCHHHH, showing nibbles; this track address is a 32-bit number that identifies each track within a volume, using a 16-bit number for each of the cylinder and track. However, for the track number only the low-order 4 bits are used; the high-order 12 bits are not. - IBM Confidential
82
New Format 28-bit Cylinder Numbers
New cylinder-track address format CCCCcccH: H is the 4-bit track number (0-14); ccc is the high-order 12 bits of the 28-bit cylinder number; CCCC is the low-order 16 bits of the 28-bit cylinder number. Large cylinder addressing is done by "stealing" 12 bits from the head value of the CCHH identifier for use as high-order cylinder bits. The cylinder number is in a non-linear form. This format preserves the 3390 track geometry. Track addresses for space below cylinder 65,520 are compatible with today's track addresses; track addresses for space above cylinder 65,520 are NOT compatible with today's track addresses. No DASD has ever had more than 15 heads (x'0000'-x'000E'). The new track addressing method is described with CCCCcccH notation, showing nibbles of 4 bits each. This track address is a 32-bit number that identifies each track within a volume, in the format hexadecimal CCCCcccH, where CCCC is the low-order 16 bits of the cylinder number, ccc is the high-order 12 bits of the cylinder number, and H is the four-bit track number. For compatibility with older programs, the ccc portion is hexadecimal 000 for tracks in the base addressing space. This method is referred to as a 28-bit cylinder number. A few key points: the cylinder number is in a non-linear form, so a program or human reading this track address needs to rearrange the bits; this format preserves the 3390 track geometry; track addresses for space in track-managed space are compatible with today's track addresses; track addresses for space in cylinder-managed space are NOT. To assist in understanding 28-bit cylinder numbers, a normalized cylinder-track address may be used for printing, where the bits are rearranged to a more readable format with a linear 28-bit cylinder number. The presence of a colon before the last nibble identifies the track address as being normalized. 
cccCCCC:H: H is the 4-bit track number (0-14); cccCCCC is the 28-bit cylinder number in a linear form. Note that not all messages/reports normalize the output; for example, IDCAMS LISTCAT reports in the native format, and others do the same. Track addresses in existing channel programs and in extent descriptors in DSCBs and elsewhere are in the form of this 28-bit cylinder number, which means a program cannot reliably use simple arithmetic operations (other than compare-equal) when manipulating track addresses. This is discussed on the next page. Terminology: "Absolute track addresses" are generally represented in the documentation as cchh or bbcchh, and the term is generally shortened to track addresses. Code that references these often must be changed. Track addresses in existing channel programs and extent descriptors in DSCBs and elsewhere are in the form CCHH, where CC is the 16-bit cylinder number and HH is the 16-bit track number in that cylinder. If the volume is an EAV, the cylinder number in these four CCHH bytes will be 28 bits and the track number will be four bits. For compatibility reasons, the 32 bits in each track address on an EAV will be in the format CCCCcccH: the 12 high-order bits of the cylinder number are in the high-order 12 bits of the two old HH bytes. You can compare two of these track addresses for equality, but you cannot reliably use a simple comparison for greater-than or less-than; any arithmetic must take this special format into consideration. An "absolute block address" is of the form cchhr or mbbcchhr and includes an absolute track address; these are often written in capitals in the documentation as CCHHR or MBBCCHHR. "Relative block addresses" are written as TTR or TTTR. The TT or TTT bytes count tracks relative to something: if they are relative to the beginning of a data set and inside the data set, they are unaffected by this project; if they are relative to the beginning of the volume, they might be affected by this project. Relative block addresses are often referred to as relative track addresses, although that term should be reserved for a two-, three-, or four-byte number that counts tracks relative to the logical beginning of a data set and inside its extents, or a number that is relative to the beginning of the volume. - IBM Confidential
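The CCCCcccH bit layout above can be worked through in a short sketch. The layout comes straight from the chart; the helper names and the normalized-printing format are ours.

```python
# Worked example of 28-bit cylinder addressing in the CCCCcccH layout.
# Helper names are illustrative; the bit layout is from the chart.

def encode_track_address(cyl, head):
    """Pack a 28-bit cylinder and 4-bit head into the CCCCcccH format."""
    assert 0 <= cyl < 1 << 28 and 0 <= head <= 14
    low16 = cyl & 0xFFFF           # CCCC: low-order 16 bits of the cylinder
    high12 = (cyl >> 16) & 0xFFF   # ccc:  high-order 12 bits of the cylinder
    return (low16 << 16) | (high12 << 4) | head

def decode_track_address(addr):
    head = addr & 0xF
    high12 = (addr >> 4) & 0xFFF
    low16 = (addr >> 16) & 0xFFFF
    return (high12 << 16) | low16, head

def normalize(addr):
    """Render as cccCCCC:H, the linear form used only for printing."""
    cyl, head = decode_track_address(addr)
    return f"{cyl:07X}:{head:X}"

# Below cylinder 65,520 the ccc nibbles are zero, so the address is
# compatible with the old CCCCHHHH form:
print(hex(encode_track_address(0x04D2, 3)))       # → 0x4d20003
# Above it, the high-order cylinder bits land in the old HH bytes:
print(hex(encode_track_address(70000, 5)))        # → 0x11700015
print(normalize(encode_track_address(70000, 5)))  # → 0011170:5
```

The encoded values also show why only compare-equal is safe on raw track addresses: cylinder 70000 encodes to 0x11700015, which is numerically smaller than many addresses for lower cylinders.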
83
Normalized Cylinder-Track Address
Normalized cylinder-track address (to be used only for printing): cccCCCC:H. H is the 4-bit track number (0-14); cccCCCC is the 28-bit cylinder number in a linear form. The colon shows that it is a normalized address. - IBM Confidential
84
Extended Address Volumes and z/VM
- IBM Confidential
85
z/VM EAV Support Two APARs for z/VM 5.4.0 and 6.1.0
VM64709 (CP) VM64711 (CMS) EAV volumes can be attached to the SYSTEM, generally for the purposes of minidisks. Non-PERM extents on CPOWNED EAV volumes are restricted to the first 65,520 cylinders: extents that cross this line will be truncated, with message HCP138E; extents that exist entirely above this line will be ignored, with message HCP139E. CPFMTXA will enforce this boundary requirement. - IBM Confidential
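The 65,520-cylinder rule for non-PERM extents can be expressed as a small classifier. The function name and return strings are illustrative; only the boundary and the message numbers come from the slide.

```python
# Sketch of CP's handling of non-PERM extents on CPOWNED EAV volumes.
# Names and return strings are illustrative; HCP138E/HCP139E are the
# messages cited in the chart.

BOUNDARY = 65520  # non-PERM extents must lie in cylinders 0..65519

def classify_extent(start_cyl, end_cyl):
    if end_cyl < BOUNDARY:
        return "allocated"
    if start_cyl < BOUNDARY:
        return "truncated at the boundary (HCP138E)"
    return "ignored (HCP139E)"

print(classify_extent(0, 1000))       # → allocated
print(classify_extent(60000, 70000))  # → truncated at the boundary (HCP138E)
print(classify_extent(70000, 80000))  # → ignored (HCP139E)
```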
86
z/VM EAV Minidisk Support
z/VM support for dedicated devices, and fullpack minidisks “Fullpack” is defined as 0-END, or DEVNO The ending cylinder must be “END”, even if the number would equal the size of the volume Partial pack minidisks on 3390-A volumes are supported, provided they exist completely below cylinder 65,520 - IBM Confidential
87
z/VM EAV Miscellaneous Support
New DDR fully supports EAV volumes. Updates to FlashCopy ensure it fully supports EAV volumes. New fields in the MRSEKSEK (Domain 7, Record 1) monitor record are less likely to wrap for larger volumes: CALCURCY, CALSKCYL, IORPOSSM, CALECYL. - IBM Confidential
88
z/VM CMS and EAV CMS currently supports volumes up to 32,767 cylinders in size. With the aforementioned APAR, CMS is updated to support, via FORMAT and ACCESS, volumes and minidisks up to 65,520 cylinders in size. Since this does not provide support for EAV-sized volumes, it can be applied separately from the CP APAR. It still does not eliminate the requirement that file status and control information be stored below the 16MB line of the CMS virtual machine. If volumes larger than 32,767 cylinders are used, care should be taken to avoid large numbers of small files on the disk; otherwise, the CMS file system may encounter problems accessing and using the device. - IBM Confidential
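The before/after CMS size limits above can be stated as a one-line rule. This is an illustrative helper (the name is ours, not a CMS interface); the two cylinder counts are from the slide.

```python
# Illustrative check of the CMS volume-size limits described above.
# Helper name is ours; the limits are the ones quoted in the chart.

def cms_format_limit(apar_vm64711_applied):
    """Largest volume/minidisk size, in cylinders, that CMS FORMAT and
    ACCESS can handle."""
    return 65520 if apar_vm64711_applied else 32767

print(cms_format_limit(False))  # → 32767
print(cms_format_limit(True))   # → 65520
```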
89
DS8000 Function Support (feature: supported by z/VSE / supported by z/VM / Linux for System z)
System z Channel Attachment, FICON: z/VSE 3.1 and later; all supported z/VM releases; all supported Linux distros
FCP-attached SCSI disks
ECKD Format: 3380, 3390
Extended Address Volumes (EAV) (>65,520 cyl): not supported by z/VSE; z/VM V5.4 and V6.1 with APARs VM64709 (CP) and VM64711 (CMS) and later; Linux: yes, when running as a VM guest
Large Volume Support (65,520 cyl): z/VSE 4.1 and later
PAV Support, Static: z/VSE 4.2 with APARs DY46953/PTF UD53399 (01C) and UD53400 (01J) and later; Linux SLES10, RHEL4 & 5
PAV Support, Dynamic: z/VM guest support only
HyperPAV: Linux SLES11, RHEL6
zHPF: planned for future Linux distributions
MIDAWs
Dynamic Volume Expansion - IBM Confidential
90
DS8000 Function Support, continued (feature: supported by z/VSE / supported by z/VM / Linux for System z)
FlashCopy Support: not supported by z/VSE
NOCOPY: z/VSE 3.1 and later; all supported z/VM releases
NOCOPY to COPY: z/VM V6.1, or V5.4 with PTFs for APARs VM64449, VM64605, and VM64684
Persistent FlashCopy: z/VM V6.1, or V5.4 with the above PTFs
Incremental FlashCopy
Inband FlashCopy
Consistency Group FlashCopy
Fast Reverse Restore
Multiple Relationships
Data Set FlashCopy
FlashCopy SE: z/VSE 4.2 and later
Target volume MM or GC: all supported releases
Remote Mirror and Copy, Global Copy: z/VSE 3.1 and later (ICKDSF); all supported z/VM releases (ICKDSF)
Metro Mirror
Global Mirror
Metro/Global Mirror
z/OS Global Mirror (XRC): managed by a native GDPS z/OS LPAR; z/VM V6.1, or V5.4 with PTFs for APARs VM64814, VM64815, and VM64816 - IBM Confidential
91
Disclaimer - IBM Confidential
Copyright © 2011 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation. Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs, or services available in all countries in which IBM operates or does business. Any reference to an IBM Program Product in this document is not intended to state or imply that only that program product may be used. Any functionally equivalent program that does not infringe IBM's intellectual property rights may be used instead. It is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. 
Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY U.S.A. - IBM Confidential