
1 IBM System z Technical University – Berlin, Germany– May 21-25
Session code zZS25 ***** PRELIMINARY VERSION V ***** Software Deployment Customer Experiences. Tim Alpaerts, Euroclear. Legend for the presenter: !=> : this symbol means that there is additional information to give orally. )=> : all of the notes provide additional information to give orally (avoids repeating the !=> on every line). The lines with !=> and )=> are in blue. Attention: the colour in the notes is not visible in the 'Normal' view of PowerPoint.

2 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system This is the agenda I'd like to cover. First I'll try to explain the context of our original software distribution problem, why and how we redesigned it, the application that controls it, and a number of practical problems.

3 Terminology Installation system: one particular MVS system dedicated to installing software and installing fixes. End-User system: any MVS system other than the installation system. Dead libraries: product libraries created on the installation system by software installation (SMP/E installed or non-SMP/E installed) that are not actively in use by an instance of the product. Retrofit: after a local correction is made on an end-user system, it needs to be incorporated into the dead libraries for future builds. Version-less dsnames: product library dsnames that do not imply a particular version or release of a software product; they contain no version-release qualifier. Non-uniform software stacks: selectively activating only the needed parts of the complete enterprise software stack on every system, mainly to reduce the license charge. I included this slide because some proofreaders complained that the presentation was incomprehensible. Bullet 1: the Installation system refers to one particular MVS system dedicated to installing software and maintenance. Bullet 2: End-User systems are all systems other than the installation system. Bullet 3: Dead libraries are product libraries created on the installation system by software installation (SMP installed or non-SMP installed) but not actively in use by an instance of the product. !=> Bullet 4: Retrofitting is incorporating a fix into the dead libraries after it has been activated & tested on an End-User system. !=> The aim is to avoid a single problem recurring in successive software deployments, so we don't have to diagnose & fix the same problem over and over again. Bullet 5: Version-less dsnames means product library dsnames that do not imply a particular version or release of a software product; they contain no version-release qualifier. Bullet 6: Non-uniform software stacks means we activate only selected software products, on those systems where they are used, mainly to reduce the software license charge.

4 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system I'd like to show you what the initial situation was before we started restructuring our software deployment, and some of the problems we used to have. I wrote 'The old situation' instead of 'The old design' because it was more evolution than design.

5 The old situation, overview
[Diagram: groups of systems in Brussels and Paris (development, test, pre-production, production and K-systems); within each group, DASD was shared with CA-MIM, and each group has an active and an inactive MVS target & dlib pair with SMP/E target and dlib zones for IBM and non-IBM products.] )=> This is a representation of what software deployment looked like before we restructured it. In this drawing the egg-shaped things represent sets (Venn diagrams) or groups of z/OS images, the little squares represent the z/OS images. Within each group of systems disks were shared with CA-MIM. Each group of systems has an active and an inactive IPL-disk. The MVS SMP/E is set up with a pair of TARGET & DLIB zones, one for the active and one for the inactive IPL-disk. In each zone all DDDEFs are defined with volsers. The DDDEFs contain the actual dsnames in use by the driving system. Important consequence of this set-up: over time things diverge, they become more and more different because there is no common origin; software maintenance is applied independently in multiple groups.

6 Some issues with the old set-up
Version-release qualifiers in dsnames of product libraries make system upgrades risky. Enqueue and reserve contention interference between lpars, instability from dead-locks. No inventory or overview of what system is using what software releases. No obvious distinction between software product libraries and databases and work datasets. Locally maintained PROGxx parmlib members accumulate errors over time. Low confidence when doing an IPL. These are some of the problems we had in the old set-up: Bullet 1: Because dsnames of product libraries contain version-release qualifiers, system upgrades are cumbersome and risky. !=> Impact analysis and scan-and-replace are required, but… !=> There is always a JCL or a Rexx procedure with a hardcoded VR qualifier somewhere, waiting to bite you after a system upgrade. Bullet 2: Enqueue and reserve contention interference between lpars, instability from dead-locks. Due to bad placement of datasets, reserves not being converted, multiple systems connected to the same dasd. Bullet 3: There is no overview of what system is using what software releases. !=> This complicates in particular license management and managing the end-of-support dates (no systematic check). !=> It also complicates cleanup and space management, logging of upgrade dates per system, ensuring all systems were upgraded, etc. !=> It requires SMP/E investigation on production systems to figure out which version is used.

7 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system The list of problems is non-exhaustive; not all problems are listed on the previous slide. In the time frame we redesigned the entire software deployment.

8 Objectives for the new design (1)
Software installation needs to create dsnames (dead libraries) different from those the driving system is using. All software installation work concentrated on one single system dedicated for this purpose. All product libraries with version-less dsnames, to avoid widespread impact from system upgrades. Distinction between R/O product libraries and R/W databases immediately apparent from the dsname. Complete isolation of DASD between lpars in the IODF, no more cross-plex sharing of dasd. This is what we want out of the new design: Bullet 1: Software installation needs to create dsnames (dead libraries) other than the ones the driving system is using. This should avoid installation work interfering with the driving system. !=> Bullet 2: Previously all software installation was done on a busy application !=> development system with 200 to 300 TSO users logged on. In order to avoid impacting a large user community with enqueues and reserves we want to concentrate all software installation work on one single system dedicated for this purpose. Bullet 3: Library dsnames don't change, but the contents could be one release or another. !=> i.e. When you see a SYS1.LINKLIB, the dsname doesn't tell you if you're !=> looking at z/OS 1.13 or OS/360; that is implied by the contents. We want that concept for all libraries from all software products. Bullet 4: For all software we want to be able to immediately distinguish dataset names between R/O product libraries and R/W databases and workfiles. Bullet 5: Instead of sharing disks cross-plex we want isolated parallel sysplexes, each operating in STAR-mode GRS, with no more need for MIM.

9 Objectives for the new design (2)
All software, the OS and all software products, are upgraded in one go. Upfront agreement on scope and planning (IPL slots). Integrated testing. All the "hold actions" are performed in parallel and are repeatable. Provide a 'perfect' APF-list and link-list, complete but without any needless entries. Ensure at all times we have an SMP perfectly in-sync for every software product rolled out on every system. Upgrade per system if required by a freeze of the environment. Complete inventory of software installation and deployment states. Visibility of what is used where (licensing, cleanup of old S/W). Installation guidelines and documentation are available the same way for all software products. !=> Bullet 1: Upfront decision on what to upgrade based on needs, EOS, risks, etc. !=> We want to upgrade the OS and all software products in one go, !=> enabling system software and applications to be tested and promoted into !=> production together. Bullet 2: avoid the mess with PROG members from the old set-up. Bullet 3: For every software product on an End-User system we want an SMP that is perfectly in-sync with the libraries that the End-User is looking at, in order to be immediately maintainable. !=> Since we only keep one SMP environment for every software product !=> version, a product must be duplicated with a new name before applying !=> maintenance; software maintenance can then safely be applied to the new !=> copy. !=> SMPE is centralised in one place. Only the Tlibs are distributed.

10 The new design, overview (1)
[Diagram: K-systems, production systems, development systems and test systems (the end-user systems); each system has an active and an inactive IPL-set of 6 model-9 volumes, with volsers &SYSR1, &SYSR2, &SYSR3, &OEMR1, &OEMR2, &OEMR3.] )=> The new design is essentially a three-part process; this slide shows the part facing the end-user: the IPL-set. Contrary to the old situation there is no SMP visible on the end-user systems. All systems use a copy of the same standardized IPL-set; the installation system is no different in this respect from the end-user systems. An IPL-set becomes 'the active IPL-set' when a system is IPLed from the first device of the set. During normal operation there are no allocations on the inactive set (but obviously there will be allocations during a roll-out). Because of the difficulty obtaining permission from users to bring down groups of systems simultaneously we decided to give each sysplex (incl. monoplexes) its own IPL-set. If you recall, previously we had a number of systems (not necessarily a sysplex) sharing IPL volumes.

11 The new design, the IPL-set
IPL-set is restored from a (virtual) tape that was created on the installation system. Each sysplex has two IPL-sets, the active set and the inactive set. First volume of an IPL-set is the load device for the end-user system. First volume restored from full volume dump to include the IPL text in the dump, the other volumes are dumped logically by dataset. Second volume contains an ICF user catalog for the VSAM datasets on the IPL-set. IPL-set is protected against updates by DFP OPEN exit IFG0EX0B, datasets can only be opened for INPUT. Bullet 1: IPL-set is restored from a tape that was created on the installation system. Bullet 2: Each sysplex has two IPL-sets, the active set and the inactive set. New software releases are activated in flip-flop manner. Bullet 3: First volume of an IPL-set is the load device for the end-user system. Bullet 4: First volume restored from full volume dump to include the IPL text in the dump, all other volumes are dumped logically by dataset. !=> This means one manipulation less on the End-User system during roll-out, one thing less that can go wrong. Bullet 5: Second volume contains an ICF user catalog for the VSAM datasets on the IPL-set. Bullet 6: IPL-set is protected against updates by DFP OPEN exit IFG0EX0B, datasets can only be opened for INPUT.
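To make the dump side of this concrete, here is a minimal sketch of the kind of DFSMSdss job that could produce such a tape: a full-volume dump of the first Level-Set volume (so the IPL text is included) and a logical dump by dataset of a further volume. The job name, DD names, volsers, output dataset names and the VTAPE unit name are illustrative assumptions; this is not the actual TDSL-generated JCL.

//LSDUMP   JOB ...
//* Sketch only: dump the first Level-Set volume physically (full
//* volume, so the IPL text travels with it) and a further volume
//* logically by dataset.
//FULL     EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//VOL1     DD  UNIT=3390,VOL=SER=L13A11,DISP=OLD
//TAPE1    DD  DSN=TDSL.LS.VOL1.DUMP,DISP=(NEW,CATLG),UNIT=VTAPE
//SYSIN    DD  *
  DUMP FULL INDDNAME(VOL1) OUTDDNAME(TAPE1)
/*
//LOGICAL  EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//VOL2     DD  UNIT=3390,VOL=SER=L13A12,DISP=OLD
//TAPE2    DD  DSN=TDSL.LS.VOL2.DUMP,DISP=(NEW,CATLG),UNIT=VTAPE
//SYSIN    DD  *
  DUMP DATASET(INCLUDE(**)) LOGINDDNAME(VOL2) OUTDDNAME(TAPE2) -
       ALLDATA(*) ALLEXCP
/*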

12 The new design, the IPL-set, continued
Datasets on the IPL-set are cataloged with symbolic volsers defined in the IEASYMxx parmlib member with substrings from &SYSR1:

SYSDEF SYMDEF(&SYSR2.='&SYSR1(1:5)2')
       SYMDEF(&SYSR3.='&SYSR1(1:5)3')
       SYMDEF(&OEMR1.='&SYSR1(1:5)5')
       SYMDEF(&OEMR2.='&SYSR1(1:5)6')
       SYMDEF(&OEMR3.='&SYSR1(1:5)7')

When MVS IPL processing sets &SYSR1, the volser of the IPL device, to "IPCAA1", then IEASYMxx sets
&SYSR2. = "IPCAA2"
&SYSR3. = "IPCAA3"
&OEMR1. = "IPCAA5"
&OEMR2. = "IPCAA6"
&OEMR3. = "IPCAA7"

Datasets on the IPL-set are cataloged with symbolic volsers. The symbols are defined in the IEASYMxx parmlib member using substrings from &SYSR1. These symbolic names are quite important throughout the software distribution process we will see further on. !=> The original idea was to concentrate the IBM software on the &SYSRx volumes and the non-IBM software on the &OEMRx volumes. !=> The reasons for the distinction between &SYSRx and &OEMRx have largely evaporated, but it was easier not to change the symbols. !=> The distinction stems from the different treatment in the old situation, before our big re-think.
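These symbols can also appear directly in catalog entries. As an illustration (not taken from the presentation), a non-VSAM Level-Set library could be cataloged with an indirect, symbolic volser using the catalog's extended indirect volume serial support, so the same entry resolves to whichever IPL-set is active; the dataset name and the job skeleton below are assumptions.

//CATSYM   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  /* Sketch: catalog an example library with the symbolic volser */
  /* &SYSR2 so it resolves at allocation time to the volser of   */
  /* the currently active IPL-set.                                */
  DEFINE NONVSAM -
         (NAME(SYSN.OPS.LOAD) -
          DEVICETYPES(3390) -
          VOLUMES(&SYSR2))
/*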

13 The new design, overview (2)
[Diagram: the installation system on the left holds the Level-set (6 model-9 volumes); its image is transported to the end-user systems on the right (K-systems, production, development and test systems), each with an active and an inactive IPL-set of 6 model-9 volumes (&SYSR1, &SYSR2, &SYSR3, &OEMR1, &OEMR2, &OEMR3).] )=> The left side of this slide depicts the installation system, the right side the end-user systems. This slide tries to show that we construct an exact image of the dasd volumes with software libraries that will be given to the end-user. We call this image the Level-Set. The red arrow shows that we transport the image of the Level-set from the installation system with DFDSS dump/restore on virtual tape. An important thing to notice: in this set-up things converge over time. If you do nothing in particular (to disturb the normal roll-out), over time things will look more and more the same on all systems. That is because there is a common origin: the installation system. So a number of configuration issues (in parmlib, proclib) can easily be handled the same way on all systems.

14 The new design, the Level-set
An exact image of the software for the end-user systems. Generated by the TDSL application on the software installation system. A 'contents' dataset is added with the list of all products and libraries on the LS. The non-VSAM datasets on the LS are not cataloged; VSAM datasets get a special temporary HLQ. Dsnames on the LS have no version-release qualifier (limited exceptions are allowed). Exists only on the software installation system. What is a Level-Set? Bullet 1: The Level-Set is an exact image of the software for the End-User system. !=> Bullet 2: The LS is generated by TDSL, a specialized ISPF / DB2 application on the software installation system. The TDSL application can maintain several Level-sets in parallel. !=> Bullet 3: The generate process that makes the LS stores a list of all datasets in all products on the LS in a CONTENTS dataset, !=> a list of the datasets it copied to the LS. !=> The CONTENTS dataset is like a package list. !=> For every dataset on the disk it gives the software product, the end-user dataset name and the REF dataset name. !=> Since the CONTENTS dataset sits on the LS it will be dumped and restored onto all IPL-sets as a permanent description of the contents. Bullet 4: The non-VSAM datasets on the LS are not cataloged; VSAM datasets get a special temporary High-Level Qualifier.

15 The new design, overview (3)
[Diagram: on the installation system, the REF-set (an SMS storage pool of REFx datasets) feeds the TDSL application, which builds the Level-set (6 model-9 volumes); the Level-set image is then transported to the active/inactive IPL-sets on the end-user systems.] This is the complete picture: )=> The TDSL application creates the Level-set disks from datasets in the REF-set. TDSL will copy only the target datasets from a set of chosen software products. The TDSL ISPF dialog creates JCL from data in SQL tables using ISPF file tailoring.

16 The new design, the REF-set
A large SMS-pool with dead libraries. All the REF-datasets created by software installation go in this pool. ACS routines direct all allocations of REFx datasets to this pool. All REF-dsnames contain a version-release qualifier. Mostly vendor product libraries, but also Euroclear 'config' libraries. Exists only on the central software installation system. Bullet 1: The REF-set is a large SMS pool with dead libraries created by software installation (maybe SMP/E installed or non-SMP/E installed). These libraries are not actively in use by any instance of the product. !=> Bullet 2: All datasets created by software installation go in this pool (SMP CSI, target libs, dlibs, installation jcl, etc.) Bullet 3: SMS directs all allocations of REFx datasets to this pool. Bullet 4: All the REF dsnames contain a version-release qualifier. !=> Datasets in the REF-set are logically grouped into TDSL products. !=> Bullet 5: Most of the REFx libraries are vendor product libraries, but there are also a number of Euroclear config libs like PARMLIB, PROCLIB, VTAMLST… !=> In this context a product is a particular version and maintenance level of a commercial program product. The dsname indicates product, version and maintenance level.

17 The new design, RENAME [Diagram: the same overview as before, highlighting the copy from the REF-set libraries to the Level-Set on the installation system.] )=> This part of the drawing shows the process that generates the Level-Set disks from the REF libraries (the red arrow). Since datasets have version-release qualifiers in the REF-set but not on the Level-Set, a rename needs to be done 'somewhere in the middle' by the TDSL application. We have established a couple of simple rules to do the rename, next slide:

18 The new design, rename rules (1)
Dsnames in the REF-set need to be renamed when they are copied to the Level-Set. Standard processing: the rename removes the version-release qualifier, i.e.
REFN.T119A.OPS.LOAD (a REFN / SYSN dataset) becomes SYSN.OPS.LOAD (one qualifier dropped)
REF1.T113A.ZOS.LINKLIB (a REF1 / SYS1 dataset) becomes SYS1.LINKLIB (two qualifiers dropped)
)=> In the naming standard, REFN corresponds to SYSN on the EU-systems and REF1 corresponds to SYS1 on the EU-systems. We use the REFN high-level qualifier for everything that is not installed by the z/OS ServerPac. Important to observe: the standard rename process does not permit two releases of a product to coexist on the LS or IPL-set.
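The TDSL-generated copy JCL itself is not shown in the presentation. As a rough sketch of how the standard rename could be expressed with a DFSMSdss logical copy, assuming RENAMEUNCONDITIONAL filtering and an illustrative output volser (neither is confirmed by the presentation):

//LSCOPY   EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  COPY DATASET(INCLUDE(REFN.T119A.OPS.**)) -
       RENAMEUNCONDITIONAL((REFN.T119A.**,SYSN.**)) -
       OUTDYNAM((L13A15))
/*

With this filter pair, REFN.T119A.OPS.LOAD would come out as SYSN.OPS.LOAD, i.e. the version-release qualifier is dropped; catalog handling (the Level-Set copies are deliberately left uncataloged) is omitted from the sketch.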

19 The new design, rename rules (2)
The standard rename does not support two or more releases side by side. Keep-Qualifier processing: the rename keeps the version-release qualifier, i.e. REFN.T119A.OPS.LOAD becomes SYSN.T119A.OPS.LOAD. Keep-Qualifier processing is never done for SYS1 datasets: REF1.T113A.ZOS.LINKLIB always becomes SYS1.LINKLIB. )=> We need the ability to run two or more releases of a software product side by side (i.e. for migration or testing actions), so we need a way to have multiple versions and maintenance levels of one product on the Level-Set. To this end the TDSL application can do a rename that preserves the version-release qualifier, so REFN.T119A.OPS.LOAD becomes SYSN.T119A.OPS.LOAD. This option, called Keep-Qualifier processing, is not valid for REF1 / SYS1 datasets, so for example REF1.T113A.ZOS.LINKLIB can never become SYS1.T113A.LINKLIB.

20 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system Let’s take a look at the TDSL application that does all of this.

21 The TDSL Product Catalog
TDSL application handles software deployment on the installation system. TDSL Product Catalog describes all products and the datasets belonging to each product in SQL tables. Each product (and version plus maintenance level) is identified by an 8-char acronym. Product Catalog has attributes at the level of the product and at the level of the dataset. Products can be duplicated in the Product Catalog to apply maintenance. TDSL is an ISPF / DB2 application that controls the part of the software deployment process that happens on the software installation system. It has a Product Catalog that contains a description of all program products and all the datasets belonging to each product in SQL tables. Each product (and version plus maintenance level) is identified by an 8-char acronym or identifier. This product identifier is the primary key in most of the SQL tables. The Product Catalog holds attributes both at the level of the product and at the level of the dataset. !=> The TDSL application is capable of duplicating a product in the Product Catalog; this duplicates the TDSL administration, the product libraries, any zFS the product has, the SMP/E zones …

22 The TDSL Product Catalog, screenshot
)=> This is a screenshot of the TDSL product catalog. I made the screenshots from a 24x80 screen to render them readable in this presentation; in actual use this would obviously not be the preferred screen format.

23 The TDSL Product Catalog, screenshot
LR indicator. The filter line provides a fast navigation facility. More attributes to the right, not in this screenshot. )=> The screenshot shows how each product is defined with an 8-char acronym, i.e. COBOL34A is version 3.4, maintenance level A of the COBOL compiler. The Product Catalog has a neat fast-navigation tool in the form of the overhead filter line; this enables you to filter on any product attribute. The most important attributes of a product are: Product name. Supplier, this is also an 8-char acronym. Verbose description. Where-Used flags, to the right, not shown in the screenshot. Last modification date-time-userid. Creation date-time-userid. Last deployment LS and the date of last deployment, which is useful for tracking obsolete products. There is no concept of product families; to the application COBOL34A is as different from COBOL34B as CICS is from CA1. A list of datasets is logically attached to each product; if you type S in front of a product, you see the dataset list. Each product is identified by an 8-char acronym.

24 The TDSL Dataset List screenshot
)=> This is the list of datasets belonging to release 3.4, maintenance level A of the COBOL compiler. It's not the most recent COBOL compiler, but it has a nice short list of datasets. As you can see, a number of attributes are registered for each dataset. Let's quickly go over the attributes.

25 The TDSL Product Catalog, dataset attributes
Dataset Type: INSTALL product installation jcl. SMP the CSI clusters, SMPPTS, SMPMTS, SMPLOG, etc. DLIB SMP dlibs, 100% inert for TDSL. TARGET SMP/E target library or other distributable dataset. VTARGET VSAM target dataset. UTARGET zFS linear, always keeps the version-release qualifier for automount. NODIST not distributed, but it contributes to the PROGxx member. APF dataset should be APF on the end-user system. LNK dataset should be link-listed on the end-user system. !=> Initially 4 dataset types were enough; we've added some more with the arrival of ZFS and VSAM on the Level-Set. INSTALL is a product installation JCL library. SMP is the CSI clusters, SMPPTS, SMPMTS, SMPLTS, SMPLOG, and all the rest of SMP. DLIB is the SMP dlibs; TDSL does nothing with these, they are 100% inert. TARGET is an SMP target library or other distributable dataset, e.g. a PARMLIB, PROCLIB or other config dataset. VTARGET is a VSAM target dataset. UTARGET is a zFS aggregate, same as a VSAM but it always keeps its version-release qualifier, intended for automount. NODIST is a dataset name that is not distributed, but it can contribute to the PROGxx member. !=> The APF flag means the dataset should be APF on the End-User system; similarly an LNK flag means the dataset should be in the linklist of the EU system. !=> LNK is stored in two chars, to provide some control over the order of datasets in the link list. !=> L1 will appear higher in the link list concatenation, L9 will appear lowest.

26 The TDSL Product Catalog, dataset attributes, continued
DSTVOL: a symbolic volser like &SYSR1, &SYSR2, … In the dataset list of the z/OS 1.13 A product:
REF1.T113A.ZOS.NUCLEUS       &SYSR1
REF1.T113A.ZOS.LINKLIB       &SYSR1
REF1.T113A.ZOS.IMW.SIMWSDCK  &SYSR2
REF1.T113A.ZOS.OMEG.TKANMOD  &SYSR…
Why fix the location of target datasets on the Level-set? To ensure SYS1.NUCLEUS etc. end up on the &SYSR1 volume, and to need fewer catalog updates on the end-user system. )=> The last dataset attribute I need to explain is the destination volume or DSTVOL; this indicates onto which volume of the Level-Set a dataset will be copied during generate. It is an ampersand symbolic volume that will correspond to an actual volser on the End-User system. By storing the DSTVOL in the dataset SQL table we fix the location of target datasets in the LS. Why do we want to do this? Some datasets need to be on a specific volume of the LS, i.e. SYS1.NUCLEUS needs to be on the first volume of the set. Also, fixing the location means fewer catalog updates will be required during roll-out on the EU system. The next slide shows a screenshot of the SYS1.NUCLEUS library in the dataset list of the ZOS113A product located on &SYSR1.

27 The TDSL product catalog, screenshot of z/OS 1.13 dataset list
)=> As you can see, the NUCLEUS dataset is defined with DSTVOL &SYSR1, so it will be copied to the first volume of the Level Set and subsequently dumped and restored to the first volume of the IPL-Set.

28 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system The next topic illustrates what we mean by non-uniform software activation, why we need it and how we do it.

29 Non-uniform software stacks (1)
[Diagram: development, test, swap validation and production systems plus K-systems, grouped by the software stack they use: EB systems, EF systems, SSE systems, K systems, and the installation system.] )=> Euroclear has acquired a number of companies that previously operated their own z/OS, each with their own software stack. As a consequence, we now have to manage a number of systems with distinct software stacks. We grouped systems using the same software, shown here in different colours: i.e. the red ones have CICS & DB2, the green ones have DB2 but no CICS, the pink ones don't have either.

30 Non-uniform software stacks (2)
Why not deploy all software on all systems? Procurement dept.: licensing all software on all systems is needlessly expensive. Audit dept.: libraries in the APF list for software products that will never be used. Why not deploy software per system? Hard to maintain the multitude of parmlib members (i.e. PROGxx members). Many different combinations to validate before a system upgrade. A solution in between is needed. All software will be present on all systems, but we make the libraries APF & link-listed only on those systems where it is licensed. To do this we add a where-used attribute to each product in the Product Catalog. Why not activate all software everywhere? This is the carpet approach. The Procurement people object to activating all software on all systems, as licensing all software on all systems would incur needless costs. Auditors object to libraries in the APF list for software products that will never be used on that particular system. Why not activate software on a per-system basis? Hard to maintain the multitude of parmlib members (i.e. PROGxx members). Many different combinations to validate before a system upgrade. A solution in between is needed. All software will be present on all systems, but we make the libraries APF & link-listed only on those systems where it is licensed. To do this we attach a where-used attribute to each product in the Product Catalog.

31 Non-uniform software stacks (3)
7 flags are attributed to each product; each flag is on (used in this region) or off (not used in this region):
SSE   PROGEG
EF    PROGEF
EP    PROGEP
ENL   PROGEN
K     PROGEK
BSOF  PROGEB
SY90  PROGES
Example: the libraries of product CORT660A will be included in PROGEG, PROGEF, PROGEP, PROGEN and PROGEB, but not in PROGEK or PROGES. )=> The where-used attribute that is attached to every product logically consists of 7 flags that are either on or off. ON means the product is used in this 'REGION', OFF means it is not used. A PROG member is generated for each region, indicated by the red arrow. The libraries of products that are not used in a particular region will not appear in the PROG member of that region.

32 Creating PROG members Why create PROG members with a program? Software installers know what datasets need to be APF or link-listed, it's in the product installation instructions -> it can be entered in TDSL. We keep lists of datasets we don't distribute, but that need to be in the PROG member. TDSL has all the info it needs to create APF lists and link lists for the End-User systems. Perhaps you are wondering why we would create PROG members with a program. The software installer knows what datasets need to be APF or link-listed since it's in his installation instructions, so he can enter this info in TDSL. In addition, we also have lists of datasets that exist only on the End-User system. !=> Creating the PROG members automatically at a central location results in fewer errors than doing it manually and locally on every end-user system. !=> All systems of one "region" have the same APF & linklist. !=> A lot of potential for mistakes is eliminated. !=> Much less effort is required to maintain it.

33 Creating PROG members (2)
)=> In this screenshot of the dataset list belonging to the Cortex product, you can see that the software installer marked the Cortex linklib as APF and Link-listed. The L9 value requests it to be placed near the end of the link list concat.

34 Creating PROG members (3)
)=> When you look at the PROG member you can see the Cortex linklib at the end of the link-list. As you see, the LNKLST entries in the PROG member do not specify a volume; they point to datasets cataloged in the master catalog. Currently all SYSN datasets are cataloged in the master catalog, just like SYS1. If the need ever arose, we could make a user catalog for SYSN datasets and generate a PROG member with a symbolic volser. It's all just data sitting in an SQL table being formatted in a particular way.
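The generated member itself only appears as a screenshot in the presentation. Below is a minimal sketch of what a fragment of such a member could look like, using standard PROGxx syntax; the member name PROGEP, the dataset names, and the use of VOLUME(*MCAT*) to resolve APF volumes from the master catalog are assumptions for illustration, not Euroclear's actual member.

/* PROGEP - sketch of a TDSL-generated member for the EP region */
APF FORMAT(DYNAMIC)
APF ADD DSNAME(SYSN.OPS.LOAD)    VOLUME(*MCAT*)
APF ADD DSNAME(SYSN.CORTEX.LOAD) VOLUME(*MCAT*)
LNKLST DEFINE NAME(LNKEP)
/* L1-flagged libraries are added near the top of the concatenation, */
/* L9-flagged libraries near the bottom.                             */
LNKLST ADD NAME(LNKEP) DSNAME(SYSN.OPS.LOAD)
LNKLST ADD NAME(LNKEP) DSNAME(SYSN.CORTEX.LOAD)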

35 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system Here’s a quick overview of the GENERATE (or build) process

36 TDSL Generate steps Choose software products to go onto a Level-Set.
An SQL query finds all unique DSTVOLs. The engineer specifies the 'real' volsers and device numbers for each symbolic DSTVOL, i.e. &SYSR1 will be built on real volser L13A11. The dialog produces JCL to build the LS with the selected software products. Only the TARGET datasets will be copied to the LS. Bullet 1: An engineer can choose software products to go onto a Level-Set, much like writing a shopping list. In the TDSL database the product selection logically consists of a group of the product acronyms you saw when I described the Product Catalog. Bullet 2: As soon as the product selection is fixed, an SQL query finds all unique DSTVOLs that are being referenced by the included products. Typically there are 6 volumes: &SYSR1, &SYSR2, &SYSR3, &OEMR1, &OEMR2, &OEMR3. Bullet 3: The dialog then asks the engineer to specify the 'real' volsers and device numbers for each of the symbolic DSTVOLs, i.e. &SYSR1 will be built on volser L13A11.

37 The TDSL Generate, screenshot
This is a screenshot of the old GENERATE dialog.

38 The treatment of link list datasets, space allocation
The problem: ALL REF-Set libraries have secondary space so SMP APPLY won't fail with abend x37. On the Level-Set, link list libraries should have zero secondary allocation, to prevent an extent being added when someone adds/updates a member. The solution: DFDSS dataset copy will copy every library into a single extent. Copy with ALLDATA & ALLEXCP will preserve the allocated space. A homegrown utility clears the secondary allocation value in the VTOC of the Level-Set. Datasets that will appear in the link list require special attention: In the REF-Set we want all libraries to have secondary allocation space so SMP APPLY won't fail with an x37 abend. On the Level-Set and the IPL-Set however, the link list libraries should be in a single extent with zero secondary allocation, in order to prevent an extent being added when someone adds or replaces a member. !=> Health check CSV_LNKLST_SPACE verifies this. How do we deliver this? DFDSS dataset copy will copy every library into a single extent. Copy with ALLDATA & ALLEXCP will preserve the allocated space. A homegrown utility clears the secondary allocation value in the VTOC of the Level-Set using the CVAF macros.
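As a sketch of the copy half of that solution (the homegrown CVAF utility is not shown), the earlier DFSMSdss copy statement could carry the two space-preserving keywords like this; the dataset filter and output volser are again illustrative assumptions:

  /* Logical copy consolidates each library into one extent;        */
  /* ALLDATA(*) and ALLEXCP preserve the allocated space, so the    */
  /* single extent is as large as the original multi-extent library.*/
  COPY DATASET(INCLUDE(REFN.T119A.**)) -
       RENAMEUNCONDITIONAL((REFN.T119A.**,SYSN.**)) -
       OUTDYNAM((L13A15)) -
       ALLDATA(*) ALLEXCP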

39 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system Let’s take a look at the roll-out on the End-User system.

40 Roll-out on the end-user system
Each sysplex has an active and an inactive IPL-set with associated master catalogs:
ACTIVE IPL-SET: IPCAA1, ACTIVE MASTER CATALOG: CATALOG.PLXP.IPCAA1.MASTER
INACTIVE IPL-SET: IPCAB1, INACTIVE MASTER CATALOG: CATALOG.PLXP.IPCAB1.MASTER
The inactive IPL-set is formatted, ICKDSF INIT. A new master catalog is delete/defined. A dump is restored onto the inactive IPL-set. REPRO NOMERGECAT of the active master catalog to the new one. The inactive master catalog is fixed for new or changed datasets. IPL from the inactive IPL-set. Here's how we do the roll-out or restore on the End-User system: Each sysplex has an active and an inactive IPL-set with associated master catalogs. Typically on a Friday afternoon, the disks of the inactive IPL-set are formatted (ICKDSF INIT). A master catalog of the inactive set is delete/defined, so it's completely blank. A dump of the Level-Set datasets is restored onto the disks of the inactive IPL-set. The active master catalog is copied with REPRO NOMERGECAT into the new one. A Rexx procedure generates IDCAMS commands to fix the new master catalog for all datasets that are new or moved to a different volume of the IPL-set.
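A hedged sketch of the catalog part of this roll-out is shown below. The catalog names match the slide; the job structure, the DELETE/DEFINE attributes and the catalog size are assumptions about how such a step could be coded, not the actual roll-out JCL.

//CATCOPY  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  /* Rebuild the inactive master catalog and seed it from the active one */
  DELETE CATALOG.PLXP.IPCAB1.MASTER USERCATALOG FORCE
  DEFINE USERCATALOG -
         (NAME(CATALOG.PLXP.IPCAB1.MASTER) -
          ICFCATALOG -
          VOLUME(IPCAB1) -
          CYLINDERS(30 30))
  REPRO INDATASET(CATALOG.PLXP.IPCAA1.MASTER) -
        OUTDATASET(CATALOG.PLXP.IPCAB1.MASTER) -
        NOMERGECAT
/*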

41 This slide just for the notes
In the weekend we IPL from the (up to now) inactive IPL-set. Notice that between the REPRO NOMERGECAT and the IPL from the new resident volumes there is a time span during which the back-pointers of the VSAM datasets that are cataloged in the master catalog are inconsistent. In our case that is from Friday afternoon until the Saturday afternoon IPL slot.

42 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system The distribution of VSAM datasets requires special attention, let’s look into that.

43 The Challenges of VSAM datasets
VSAM datasets need to be cataloged for all processing. Interference with the driving system when copying VSAM datasets with the same name. All IDCAMS processing requires a correct VSAM 'back-pointer'. [Diagram: the ICF catalog on the catalog volume holds a forward pointer to the VSAM dataset SYSV.OMVS.ROOT.ZFS; the VVR in the VVDS on the volume holding the dataset contains the back pointer to the catalog.] )=> VSAM datasets need to be cataloged for all processing. If the driving system is using the same VSAM cluster names as the ones we try to copy to the LS, we will obviously interfere with the driving system. On the IPL-set the VSAM 'back-pointer' must be correct for all IDCAMS processing. We need to pay attention when multiple systems, each with their own master catalog, are sharing (R/O) the same cluster; that situation occurs when two systems each with their own master catalog share an IPL-set.

44 The distribution of VSAM datasets (1)
On the installation system: Define a new ICF usercatalog on one of the LS volumes. Define an alias for a new temporary HLQ with RELATE to the new usercat, i.e. SYSDEF. Copy the VSAM datasets to the LS with RENAMEUNCONDITIONAL, REFx -> SYSDEF. The Level-set is dumped to tape. On the end-user system: Define a new ICF usercatalog on one of the IPL-set volumes. Delete/define the SYSDEF alias. Restore the VSAM clusters with the SYSDEF HLQ from tape (no rename). IDCAMS ALTER all SYSDEF dsnames in the user catalog to the normal HLQ for VSAM. This is how we distribute VSAM datasets, keeping them cataloged at all times while not disturbing the driving system: During the generate process on the installation system, we define a new ICF usercatalog on one of the LS volumes. A new alias is defined for a new temporary HLQ with RELATE to the new usercat, i.e. SYSDEF. The VSAM datasets we need to distribute are copied to the LS with RENAMEUNCONDITIONAL, REFx -> SYSDEF. This will cause the copies to be cataloged in the new usercatalog. The Level-set is then dumped to tape. When we do the roll-out on the End-User system, we define a new blank usercatalog and we set up the SYSDEF alias to relate to the new catalog. A standard DFDSS dataset restore will restore all datasets from tape and it will catalog the VSAM clusters in the new empty user catalog. Next, the VSAM clusters are renamed with an IDCAMS ALTER job to rename from the temporary HLQ into the real intended HLQ. !=> The ALTER does not make the ICF catalog entry go to a different catalog and is not blocked by any enqueue.
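A minimal sketch of that final ALTER step, assuming one ALTER command is generated per cluster (and per component); the dataset names are illustrative, SYSV.OMVS.ROOT.ZFS being borrowed from the earlier slide:

//FIXHLQ   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  /* Rename the restored clusters from the temporary SYSDEF HLQ to */
  /* the real SYSV HLQ; the entries stay in the same user catalog. */
  ALTER SYSDEF.OMVS.ROOT.ZFS NEWNAME(SYSV.OMVS.ROOT.ZFS)
  ALTER SYSDEF.OMVS.ROOT.ZFS.DATA NEWNAME(SYSV.OMVS.ROOT.ZFS.DATA)
/*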

45 The distribution of VSAM datasets (2)
An alias with symbolic relate accomplishes the same 'volume switching function' as the symbolic volumes in the catalog do for non-VSAM datasets:

  /* IPL-set A */
  DEFINE USERCATALOG -
         (NAME(CATALOG.SYSV.IPCAA1) -
          ICFCATALOG -
          VOLUME(IPCAA2) -
          CYLINDERS(1 1))

  /* IPL-set B */
  DEFINE USERCATALOG -
         (NAME(CATALOG.SYSV.IPCAB1) -
          ICFCATALOG -
          VOLUME(IPCAB2) -
          CYLINDERS(1 1))

  DEFINE ALIAS(NAME(SYSV) -
         SYMBOLICRELATE(CATALOG.SYSV.&SYSR1))

)=> The previous slide shows how the VSAM datasets end up getting cataloged in a new catalog of the inactive IPL-set during the roll-out process. This slide shows the method that we use to make the alias for the distributed VSAM clusters automatically point to the user catalog belonging to the active IPL-set. The ICF catalogs for the active and inactive IPL-set are defined with the volser of the IPL volume in the catalog name. The ALIAS for the VSAM HLQ, in our case SYSV, is defined with a SYMBOLICRELATE parameter that contains an &SYSR1 symbol. The ALIAS will always relate to the correct catalog for the currently active IPL-set without needing any maintenance when doing an IPL from the other IPL-set. In fact, the alias with symbolic relate accomplishes the same 'volume switching function' as the symbolic volumes in the catalog do for non-VSAM datasets. It does this for all VSAM datasets, not just ZFS (i.e. the DSQPNLE VSAM cluster of DB2 QMF).

46 The distribution of VSAM datasets (3)
LISTCAT ENTRY(SYSV) ALIAS ALL

ALIAS SYSV
  IN-CAT --- CATALOG.PLXP.IPCAA1.MASTER
  HISTORY
    RELEASE
  ASSOCIATIONS
    SYMBOLIC-CATALOG.SYSV.&SYSR1
    RESOLVED-CATALOG.SYSV.IPCAA1

)=> This slide shows how the alias appears in a LISTCAT; you can see how the symbolic RELATE value gets resolved.

47 Agenda Terminology The old situation The new design
The Product Catalog in TDSL Non-uniform software stacks TDSL Product Selection & Generate (build) processes The roll-out (restore) process VSAM challenges zFS on the installation system The last topic I’d like to cover is how we handle zFS file systems on the installation system.

48 z/FS file systems on the installation system (1)
Risks of APPLY on statically mounted file systems: If the wrong file system is mounted, an APPLY job could update USS components belonging to a different SMP/E zone than the OS libraries. Potentially there could be data not hardened in the file system when an R/W mounted file system is copied with DFDSS. How automounted file systems can help: The USS path in the dddefs can contain a zFS qualifier that makes automount mount the file system corresponding to that zone. Automount will unmount file systems that have not been accessed for some amount of time. This reduces the time window during which uncommitted data can exist. When you need to APPLY a fix that will impact USS components, obviously the relevant file system needs to be mounted. Statically mounting file systems brings some risks with it. These are some of the risks identified: Bullet 1: If the wrong file system is mounted, an SMP APPLY job could update USS components belonging to a different SMP/E zone than the OS libraries. !=> So you actually corrupt two zones at once. !=> The risk is made worse because !=> the person doing the mount is not always the same as the person who's running the APPLY. !=> An IPL intervening between the mount and the APPLY will revert all mounts back to how they were defined in the BPXPRMxx parmlib member. !=> And potentially a long time could elapse between the file system mount and the APPLY.

49 z/FS file systems on the installation system (2)
/etc/auto.master:
  /SERVICE/DB2/DB2BASE  /etc/service_db2_db2base.map

/etc/service_db2_db2base.map:
  name        *
  type        ZFS
  filesystem  REFN.<uc_name>.DB2.DB2BASE.ZFS
  mode        rdwr
  duration    60
  delay
  security    Yes
  setuid      no

DDDEF SDSNABIN in target zone TD1010A contains the USS path: '/SERVICE/DB2/DB2BASE/T10101A/bin/IBM/'
)=> Here's how it works: The automount control file /etc/auto.master points to the map file to be used for a particular directory; in the example this is directory /SERVICE/DB2/DB2BASE. The map file contains a filesystem line with the pattern name of the ZFS aggregate dataset name. The SMP DDDEF is defined with a path that contains the qualifier of the ZFS aggregate.

50 z/FS file systems on the installation system (3)
The SMP/E APPLY job:

//SMPCNTL DD *
  SET BDY(TD1010A) .
  APPLY SELECT(…) .

When SMP/E opens a file in the SDSNABIN directory it will cause the automount daemon to mount REFN.T1010A.DB2.DB2BASE.ZFS onto mount point /SERVICE/DB2/DB2BASE/T10101A. Automount replaces <uc_name> in the map file with the path qualifier from the SMP/E dddef path to obtain the zFS dsname, i.e. filesystem REFN.<uc_name>.DB2.DB2BASE.ZFS becomes REFN.T1010A.DB2.DB2BASE.ZFS. When the SMP APPLY step opens a file in the SDSNABIN directory the automount daemon will see this. It will determine the name of the ZFS aggregate that needs to be mounted from the pattern in the map file. It takes the filesystem string from the map file and replaces the <uc_name> with the uppercased directory name, T10101A in the example, to get the name of the ZFS aggregate. !=> With the map file as shown on the previous slide, the ZFS will remain mounted for 60 minutes after the last access. !=> The set-up may seem complicated, but it offers the huge advantage of never mixing USS and OS components in an APPLY step. !=> A mix-up is made impossible because each DDDEF contains the qualifier of the ZFS linear. !=> When you work with multiple target zones attached to one global this eliminates a big source of problems. !=> The only drawback we found is that some unix scripts in some software installation procedures insist on particular path names, leaving no room for the 'ZFS qualifier' directory.

51 z/FS file systems on the installation system (4)
A loose end: how to distribute large zFS file systems that need the SMS extended-format data class attribute? An SMS-managed volume in the IPL-set is rather impractical. Regarding the distribution of ZFS this is one of the unsolved problems: How to distribute large zFS file systems that need the SMS extended-format data class attribute? An SMS-managed volume in the IPL-set is rather impractical.

52 The End Question Time. Loose ends: how to distribute large zFS file systems that need the SMS extended-format data class attribute? An SMS-managed volume in the IPL-set is not very practical. Show and explain a sample of a CONTENTS dataset. If there is interest in the audience and time permits, and a RAS connection can be established, a short tour of the TDSL application.

53 Reporting on changes in the field
At Level-set generation time a hash is created for all datasets. Reporting is available to identify all changes between the Level-set and a resident set deployed on a user system.

