Some mainframe operating systems
IBM: OS/360 -> OS/370 -> MVS -> OS/390 -> z/OS, and VM (the name "Big Blue" comes from this series; it has been both outright stolen and closely imitated)
EC (USSR): machines practically 100% copied from IBM
ICL (UK): an OS similar to IBM VM
HP (formerly TANDEM): PATHWAY, a completely different OS
SIEMENS: BS2000, now together with FUJITSU
Introduction to OS/390
OS/390 is an integrated enterprise server operating system. It incorporates into one product a leading-edge and open communications server, distributed data and file services, Parallel Sysplex system support, object-oriented programming, the Distributed Computing Environment (DCE), and open application interfaces. As such, it is uniquely suited to integrating today's heterogeneous and multi-vendor environments. By incorporating the base operating system, it continues to build on the classic strengths of MVS: reliability, continuous availability features, and security. This provides a scalable system that supports massive transaction volumes and large numbers of users with high performance, as well as advanced system and network management, security, and 24/7 availability.
Content
· Hardware overview
· System components overview
· Storage and program management
· Data sets, catalogs, etc.
OS/390 software and hardware
The base OS/390 operating system executes in a processor and resides in processor storage during execution. The OS/390 operating system is commonly referred to as the system software. The hardware consists of the processors and other devices such as direct access storage devices (DASD), tape, and consoles. Tape and DASD are used for system functions and by user programs that execute in an OS/390 environment.

When you order OS/390, you receive your order on tape cartridges. When you install the system from tape, the system code is stored on DASD volumes. Once the system is customized and ready for operation, system consoles are required to start and operate the OS/390 system. Not shown in the visual are the control units that connect the CPU (processor) to the tape, DASD, and console devices.

The main concepts shown here are:
Software — The OS/390 operating system consists of load modules, often called executable code. These load modules are placed into load libraries on DASD volumes during system installation.
Hardware — The system hardware consists of all the devices, controllers, and processors that make up an OS/390 complex.
Devices — Shown in the visual are the tape, DASD, and console devices. Many other types of devices are discussed later in this document.
Storage — Central storage, often called real or main storage, is where the OS/390 operating system executes. All user programs share the storage of the processor with the operating system.
Console
In early versions of the system, the console was a typewriter, and the machine operators worked at it. Programmers and users entered programs and data on punched cards and received their output as printed listings. This explains many concepts in the system that are still alive today (for example, the line numbers in positions 1-6 of each line of program source). Later a display with a keyboard appeared (for example, the 3270 terminal). It is an alphanumeric display without graphics, typically 24 lines of 80 characters.
OS/390 evolution
OS/390 elements and features
The OS/390 system consists of base elements that deliver essential operating functions. In addition to the services provided by MVS/ESA, this means such functions as communications support, online access, host graphics, and online viewing of publications. In addition to the base, OS/390 has optional features that are closely related to the base features. There are two types of optional features:
· One type of feature is always shipped with the OS/390 system, whether ordered or not. These features support dynamic enablement, which allows you to dynamically enable and disable them. If such a feature is ordered, it is shipped enabled for use. If such a feature is not ordered, it is shipped disabled; it can later be enabled.
· A second type of optional feature is not shipped automatically. These features must be ordered specifically.
The idea of the OS/390 system is to have elements and features instead of program products. This concept might be more easily explained by saying that OS/390 consists of a collection of functions that are called base elements and optional elements. The optional elements (features) are either integrated or nonintegrated. It is important to note that these optional features, both integrated and nonintegrated, are also tested as part of the integration of the entire system. The intention of this visual is to explain the difference between these terms. It is not the intention to discuss which products are included in OS/390 and which are not.
- Shipped as part of the OS/390 system are the base operating system and the products/features that were part of MVS/ESA SP V5.2.2, for example, UNIX System Services, SOMobjects, and LAN services. In addition to these features, products such as VTAM, TSO/E, ISPF, GDDM, and BookManager READ/MVS, which provide essential operating system functions, are included in the base and called base elements. The list on the visual is not a complete list; more details will be provided later in the course. Some of the base elements can be dynamically enabled and disabled, for example TCP/IP and DFSMS/NFS. The reason for this is that a customer may choose to use a vendor product for TCP/IP and NFS instead of IBM's products.
- In addition to the OS/390 base, there is a set of optional features. Note that there are two types of optional features: one type is always shipped, and the other must be specifically ordered.
- The features that support dynamic enablement are always shipped. Examples are JES3, DFSMSdss, and DFSMShsm. If these features are ordered as part of the OS/390 system order, they are shipped enabled in the system. If they are not ordered, they are shipped disabled. Later on, you can use them by letting IBM know and by dynamically enabling them through a SYS1.PARMLIB member (see the sketch after this visual).
- The other type of features are the optional features equivalent to optional program products. Examples are RACF, RMF, the C/C++ compiler, and so on. Some of the optional products will still be available as separately orderable products for customers that are using MVS/ESA. However, it is IBM's intention to provide new functions only within the OS/390 elements and features. Future releases of the OS/390 system will contain more elements and features as more program products are included in the solution.
Additional Information — There are two classifications of elements in the OS/390 system: exclusive and nonexclusive.
· Exclusive elements: The functional level of an element or feature that can be ordered only as part of the OS/390 package, and is not available as an independent element or feature anywhere else.
· Nonexclusive elements: Those elements or features included in the OS/390 package that are also orderable as independent products, at the same functional level, from the MVS product set.
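As a sketch of how dynamic enablement works: the enablement policy lives in the IFAPRDxx parmlib member, where each PRODUCT statement names a feature and its state. The member suffix, product ID, and feature name below are illustrative, not taken from the slides; check the OS/390 Planning for Installation manual for the exact values on your system. An entry enabling the JES3 feature might look like:

PRODUCT OWNER('IBM CORP')
        NAME('OS/390')
        ID(5647-A01)
        FEATURENAME('JES3')
        STATE(ENABLED)

Changing STATE(ENABLED) to STATE(DISABLED) and activating the member (with the SET PROD operator command, if available at your release) is what "dynamically enabling and disabling" means in practice.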
OS/390 contents
OS/390 is based upon the MVS/ESA SP V5.2.2 product, and the latest versions of associated products. The OS/390 system provides solutions for the following major areas:
· LAN Services: Provides support for S/390 to be a data and print server in a Local Area Network (LAN) environment, as well as a focal point for LAN administration, enabling LAN workstation users to store and share data and applications in a central location on the S/390.
· Distributed Computing: Support for distributed applications using industry solutions such as the Distributed Computing Environment (DCE), Distributed File System (DFS), and Network File System (NFS).
· eNetwork Communications Server: (Also known as CS for OS/390 and SecureWay Communications Server.) Provides connectivity to a broad set of users and vendor platforms, opening OS/390 for networking applications (called SecureWay Communications Server with Release 8).
· System Services: Provide the classic strengths of MVS: rock-solid reliability and availability, and support for high-transaction workloads and many users with optimum performance.
· Systems Management: Provides a window to enterprise-wide systems management.
· Application Enablement: Support for the new object technology and rapid development of applications, improving time-to-market for new business functions.
· UNIX System Services: Support for open standards such as POSIX and XPG4.2 provides opportunities for more applications on the S/390 platform.
· Softcopy Services: Improves productivity in systems installation and management.
· Network Computing Services: Supports secure access to the Internet with Domino Go Webserver.
· NetQuestion: Provides a powerful, full-text indexing and search server. It supports high-speed searching of OS/390 Web sites, as well as documents stored on the OS/390 server.
Purpose — Introduce the contents and solutions provided by the OS/390 system.
Details — The OS/390 system is based upon the MVS/ESA SP V5.2.2 system and associated products. However, OS/390 contains functional changes in many of the base elements and features that are exclusive to OS/390. The Installation and Planning for Migration units present information on how to install and migrate to OS/390. Use this visual to show that the OS/390 system can be looked upon as an umbrella solution for all the operating system functions in a S/390 environment. It supports several important industry standards: CORBA for objects, OSF for open distributed applications, and XPG4.2 for open systems support. It is a complete server system providing solutions for new technologies such as objects, client/server, and LAN, just to mention a few. Each of these areas will be briefly presented on the following visuals. The text shows an overview of the OS/390 system as well as the structure of this course. This course will also include some information on how to implement the server solutions.
Transition Statement — Before proceeding with an overview of each of the server areas, the next two visuals define the base elements and features that make up OS/390.
Data Facility Storage Management Subsystem (DFSMS)
DFSMS/MVS and MVS comprise the base MVS operating system, where DFSMS/MVS performs the essential data, storage, program, and device management functions of the system. DFSMS/MVS is the central component of both system-managed and non-system-managed storage environments. DFSMS/MVS, MVS, and ESA/370 or ESA/390 hardware exploit the usability and function available with MVS. MVS supports both 24-bit and 31-bit addressing used by components of DFSMS/MVS. Many DFSMS/MVS components have modules or data in extended virtual storage above 16 MB, leaving more space below the 16 MB line for user applications.

The DFSMS environment consists of a set of IBM hardware and software products that together provide a system-managed storage solution for MVS installations. DFSMS/MVS is an integral part of this environment. The components of DFSMS/MVS automate and centralize storage management based on installation-defined policies for availability, performance, space, and security. The Interactive Storage Management Facility (ISMF) provides the user interface for defining and maintaining these policies, and the Storage Management Subsystem (SMS) governs these policies for the system. In this environment, the Resource Access Control Facility (RACF) and Data Facility Sort (DFSORT) complement the functions of the base operating system: RACF provides resource security functions, and DFSORT adds the capability for faster and more efficient sorting, merging, copying, reporting, and analyzing of business information.
Components of OS/390 security
The OS/390 security server consists of these components:
· OS/390 Distributed Computing Environment (DCE) security server — The DCE Security Server provides user and server authentication for applications using the client-server communications technology contained in the Distributed Computing Environment for OS/390. Beginning with OS/390 Security Server Version 2 Release 5, the DCE Security Server can also interoperate with users and servers that make use of the Kerberos V5 technology developed at the Massachusetts Institute of Technology, and can provide authentication based on Kerberos tickets. Through integration with RACF, OS/390 DCE support allows RACF-authenticated OS/390 users to access DCE-based resources and application servers without having to further authenticate themselves to DCE. In addition, DCE application servers can, if needed, convert a DCE-authenticated user identity into an RACF identity and then access OS/390 resources on behalf of that user, with full RACF access control.
· OS/390 Firewall Technologies — Implemented partly in the Security Server and partly in the SecureWay Communications Server for OS/390, OS/390 Firewall Technologies provide basic firewall capabilities on the OS/390 platform to reduce or eliminate the need for non-OS/390 platform firewalls in many customer installations. The Communications Server provides the firewall functions of IP packet filtering, IP security (VPN or tunnels), and Network Address Translation (NAT). The Security Server provides the firewall functions of FTP proxy support, SOCKS daemon support, logging, configuration, and administration.
· OS/390 Lightweight Directory Access Protocol (LDAP) server — The LDAP Server provides secure access from applications and systems on the network to directory information held on OS/390, using the Lightweight Directory Access Protocol.
· Resource Access Control Facility (RACF) — The primary component of the SecureWay Security Server for OS/390 is the Resource Access Control Facility, which works closely with OS/390 to protect its vital resources. Building from the strong security base provided by the RACF component, the Security Server is able to incorporate additional components that aid in securing your system as you make your business data and applications accessible from your intranet, extranets, or the Internet.
RACF Functions
[Diagram: users issue a LOGON or access request to the system with RACF. RACF performs identification and authentication, authorization checking, event logging and reporting, and administration, using profiles stored in the RACF database. Protected resources include MVS data sets, CICS transactions, and system commands. Access events and unauthorized access attempts are recorded via SMF to tape for reports; the SPECIAL, OPERATIONS, and AUDITOR user attributes govern security administration, access, and monitoring of system events.]
RACF - the most important point
If you receive an error message that says something about RACF, you are most likely trying to do something you are not allowed to do. Either you do not have sufficient authority (and must ask the administrator for help), or you really are trying to do something you should not be doing (deliberately or not).
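For illustration, this is how an administrator typically defines a data set profile and grants a user read access, running standard RACF TSO commands in batch through the TSO terminal monitor program. The job step name, data set name, and user ID are hypothetical; the ADDSD and PERMIT commands themselves are the standard RACF interface:

//RACFPERM EXEC PGM=IKJEFT01
//* Run RACF TSO commands in batch via the TSO terminal monitor program
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 ADDSD  'VERA.LUZ.DATA' UACC(NONE)
 PERMIT 'VERA.LUZ.DATA' ID(USER01) ACCESS(READ)
/*

If generic profiles are involved, a SETROPTS GENERIC(DATASET) REFRESH may also be needed before the change takes effect.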
Resource Measurement Facility (RMF)
Many different activities are required to keep your OS/390 running smoothly and to provide the best service on the basis of the available resources and workload requirements. The console operator, the service administrator, the system programmer, or the performance analyst will do these tasks. RMF is the tool that helps each of these people do the job effectively. RMF gathers data using three monitors:
· Short-term data collection with Monitor III
· Snapshot monitoring with Monitor II
· Long-term data gathering with Monitor I
Data is gathered for a specific cycle time, and consolidated data records are written at a specific interval time. The default value for data gathering is one second, and for data recording it is 30 minutes. You can select these options according to your requirements and change them whenever the need arises. Monitor I collects long-term data about system workload and resource utilization, and covers all hardware and software components of your system: processor, I/O device, and storage activities and utilization, as well as resource consumption, activity, and performance of groups of address spaces.
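Monitor I gathering options are read from an ERBRMFxx parmlib member. As a minimal, hedged sketch (option spellings as I recall them for OS/390 RMF; verify against your RMF manuals), the defaults quoted above correspond to:

CYCLE(1000)
INTERVAL(30M)

CYCLE is specified in milliseconds (1000 ms = one second of sampling), and INTERVAL(30M) requests a consolidated record every 30 minutes.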
System Management Facilities (SMF)
System management facilities (SMF) collects and records system and job-related information that your installation can use in:
· Billing users
· Reporting reliability
· Analyzing the configuration
· Scheduling jobs
· Summarizing direct access volume activity
· Evaluating data set activity
· Profiling system resource use
· Maintaining system security
SMF formats the information that it gathers into system-related records or job-related records. System-related SMF records include information about the configuration, paging activity, and workload. Job-related records include information on the CPU time, SYSOUT activity, and data set activity of each job step, job, APPC/MVS transaction program, and TSO/E session.
An installation can provide its own routines as part of SMF. These routines receive control either at a particular point as a job moves through the system, or when a specific event occurs. For example, an installation-written routine can receive control when the CPU time limit for a job expires or when an initiator selects the job for processing. The routine can collect additional information, or enforce installation standards.
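SMF writes its records to the SYS1.MANx recording data sets, which installations periodically dump to history files with the IFASMFDP utility. A minimal sketch of such a dump job follows; the data set names and the record type selected are hypothetical, while the program name and the INDD/OUTDD control statements are the standard IFASMFDP interface:

//SMFDUMP  EXEC PGM=IFASMFDP
//* Dump one SMF recording data set and keep only type 30 (job/step) records
//SYSPRINT DD SYSOUT=*
//DUMPIN   DD DSN=SYS1.MAN1,DISP=SHR
//DUMPOUT  DD DSN=IBMUSER.SMF.TYPE30,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(10,5),RLSE)
//SYSIN    DD *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(30))
/*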
Workload Manager (WLM)
Before the introduction of MVS Workload Manager, MVS required you to translate your data processing goals from high-level objectives about what work needs to be done into the extremely technical terms that the system can understand. This translation requires high-skill-level staff, and can be protracted, error-prone, and ultimately in conflict with the original business goals. Multisystem, sysplex, parallel processing, and data sharing environments add to the complexity. MVS Workload Manager provides a solution for managing workload distribution, workload balancing, and distributing resources to competing workloads. Workload management is the combined cooperation of various subsystems (CICS, IMS/ESA, JES, APPC, TSO/E, OS/390 UNIX System Services, DDF, DB2, SOM, LSFM, and Internet Connection Server) with the MVS Workload Manager (WLM) component.
Virtual Lookaside Facility (VLF)
Virtual lookaside facility (VLF) is a set of services that can improve the response time of applications that must retrieve a set of data for many users. VLF creates and manages a data space to store an application's most frequently used data. When the application makes a request for data, VLF checks its data space to see if the data is there. If the data is present, VLF can rapidly retrieve it without requesting I/O to DASD.

To take advantage of VLF, an application must identify the data it needs to perform its task. The data is known as a data object. Data objects should be small to moderate in size, named according to the VLF naming convention, and associated with an installation-defined class of data objects. Certain IBM products or components, such as LLA, TSO/E, CAS, and RACF, use VLF as an alternate way to access data. Since VLF uses virtual storage for its data spaces, there are performance considerations each installation must weigh when planning for the resources required by VLF.

Note: VLF is intended for use with major applications. Because VLF runs as a started task that the operator can stop or cancel, it cannot take the place of any existing means of accessing data on DASD. Any application that uses VLF must also be able to run without it.

Library lookaside (LLA) is a started-task system address space that improves the system's performance by reducing the contention for disk volumes, the searching of library directories, and the loading of programs. Directory entries for the primary system library, SYS1.LINKLIB, for program libraries concatenated to it in SYS1.PARMLIB(LNKLSTxx), and for additional production libraries named in SYS1.PARMLIB(CSVLLAxx) are read into the private area of the LLA address space during its initialization. Subsequent searches for programs in these libraries begin with the directories in LLA, not with the data sets on DASD. The most active modules from LLA-managed libraries are staged into the DCSVLLA data space managed by VLF. You will obtain the most benefit from LLA when you have both LLA and VLF functioning. You should plan to use both.
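VLF classes, including the one LLA uses, are defined in the COFVLFxx parmlib member. A minimal, hedged sketch (the MAXVIRT value is illustrative; its units are 4 KB blocks):

CLASS NAME(CSVLLA)
      EMAJ(LLA)
      MAXVIRT(4096)

VLF itself then runs as a started task (for example, START VLF,SUB=MSTR,NN=00 to pick up COFVLF00).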
UNIX System Services
Beginning with OS/390 Release 3, UNIX System Services has been merged with the BCP, and is now part of the BCP FMID. In addition, the OMVS address space is started automatically. BPXOINIT is the started procedure that runs the initialization process. The OMVS address space is started automatically at IPL by means of the OMVS= statement in the IEASYSxx parmlib member. OS/390 UNIX interacts with the following elements and features of OS/390:
· C/C++ Compiler, to compile programs
· Language Environment, to execute the shell and utilities or any other XPG4-compliant shell application
· Data Facility Storage Management Subsystem/MVS (DFSMS/MVS)
· OS/390 Security Server
· Resource Measurement Facility (RMF)
· System Display and Search Facility (SDSF)
· Time Sharing Option Extensions (TSO/E)
· eNetwork Communications Server - TCP/IP Services (called SecureWay Communications Server with Release 8)
· ISPF, to use the dialogs for OEDIT, or ISPF/PDF for the ISPF shell
· BookManager READ/MVS, to use the OHELP online help facility
A sketch of the IEASYSxx statement involved follows this list.
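As a hedged illustration, the OMVS= statement in IEASYSxx selects the BPXPRMxx member(s) that configure OS/390 UNIX; the suffix 00 here is illustrative:

OMVS=00

At IPL this causes OMVS to start automatically using the settings in BPXPRM00 (file systems, process limits, and so on).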
Language Environment (LE)
Today, enterprises need efficient, consistent, and less complex ways to develop quality applications and to maintain their existing inventory of applications. The trend in application development is to modularize and share code. Language Environment gives you a common environment for all Language Environment-conforming high-level language (HLL) products. An HLL is a programming language above the level of assembler language and below that of program generators and query languages.

In the past, programming languages have had limited ability to call each other and behave consistently across different operating systems. This has constrained those who wanted to use several languages in an application. Programming languages have had different rules for implementing data structures and condition handling, and for interfacing with system services and library routines.

LE components
The visual shows the separate components that make up Language Environment. Language Environment consists of:
· Basic routines that support starting and stopping programs, allocating storage, communicating with programs written in different languages, and indicating and handling conditions
· Common library services, such as math services and date and time services, that are commonly needed by programs running on the system. These functions are supported through a library of callable services.
· Language-specific portions of the run-time library. Many language-specific routines call Language Environment services, so behavior is consistent across languages.
POSIX support is provided in the Language Environment base and in the C language-specific library.
LE’s common run-time environment
The graphic illustrates the common environment that Language Environment creates. It also shows that each HLL has its specific run-time and SYSLIB libraries, and shares with the other HLLs a Common Execution Library (CEL). The graphic further shows that load modules produced in this way can be executed in different operating environments under OS/390 or VM/ESA.

Using Language Environment
Language Environment helps you create mixed-language applications and gives you a consistent method of accessing common, frequently used services. Building mixed-language applications is easier with Language Environment-conforming routines because Language Environment establishes a consistent environment for all languages in the application. Language Environment provides the base for future IBM language library enhancements in the OS/390 and VM environments.

Many system dependencies have been removed from Language Environment-conforming language products. Because Language Environment provides a common library, with services that you can call through a common callable interface, the behavior of your applications will be easier to predict. Language Environment's common library includes common services such as messages, date and time functions, math functions, application utilities, system services, and subsystem support. The language-specific portions of Language Environment provide language interfaces and specific services that are supported for each individual language. Language Environment is accessed through defined common calling conventions.
Time Sharing Option Extensions (TSO/E)
TSO/E is a base element of OS/390. TSO/E allows users to interactively share computer time and resources. In general, TSO/E makes it easier for people with all levels of experience to interact with the MVS system. Before OS/390, TSO Extensions (TSO/E) was a licensed program for the MVS and MVS/ESA System Products, and it was an extension of the Time Sharing Option (TSO) of former MVS systems. TSO/E has advantages for a wide range of computer users. TSO/E users include system programmers, application programmers, information center administrators, information center users, TSO/E administrators, and others who access applications that run under TSO/E.
OS/390 Storage Concepts
This chapter describes many of the OS/390 storage concepts that system programmers need to know to do their job, such as:
· Address spaces
· Subsystem definitions
· Virtual storage layouts for address spaces
· How storage is managed by OS/390
· How processor storage is managed
The initialization process begins when the system operator selects the LOAD function at the system console. MVS locates all of the usable central storage that is online and available to the system, and creates a virtual environment for the building of various system areas.
Processor storage consists of central storage plus expanded storage. The system uses a portion of both central storage and virtual storage. To determine how much central storage is available to the installation, the system's fixed storage requirements must be subtracted from the total central storage. The central storage available to an installation can be used for the concurrent execution of the paged-in portions of any installation programs.
To tailor the system's storage parameters, you need a general understanding of the system initialization and storage initialization processes. The system initialization process prepares the system control program and its environment to do work for the installation. The process essentially consists of:
· System and storage initialization, including the creation of system component address spaces
· Master scheduler initialization and subsystem initialization
When the system is initialized and the job entry subsystem is active, the installation can submit jobs for processing by using the START, LOGON, or MOUNT command. In addition to initializing system areas, MVS establishes system component address spaces. MVS establishes an address space for the master scheduler (the master scheduler address space) and other system address spaces for various subsystems and system components. Some of the component address spaces are:
· Program call/authorization for cross-memory communications
· System trace
· Global resource serialization
· Dumping services
OS/390 Address Spaces
When you start OS/390, master scheduler initialization routines initialize system services such as the system log and communications task, and start the master scheduler address space, which becomes address space number one (ASID=1). Other system address spaces are then started during the initialization process of OS/390. Then the subsystem address spaces are started. The master scheduler starts the job entry subsystem (JES2 or JES3), which is the primary job entry subsystem. Then the other defined subsystems are started. All subsystems are defined in SYS1.PARMLIB, member IEFSSNxx (see the sketch after this visual); these are the secondary subsystems. The visual shows four types of address spaces:
System — The system address spaces are started following initialization of the master scheduler. These address spaces perform functions for all the other types of address spaces that start in an OS/390 system.
Subsystem — You cannot run OS/390 without a primary job entry subsystem, either JES2 or JES3.
TSO logon — These address spaces start when a user issues a logon to TSO/E. Each user executes in a separate address space.
Batch job — These address spaces are started by JES when a JCL stream is passed to JES and a job is created and then subsequently scheduled into execution.
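A hedged sketch of IEFSSNxx in keyword format (the SMS initialization routine shown is the standard one, but treat the details as illustrative for your release):

SUBSYS SUBNAME(JES2) PRIMARY(YES) START(YES)
SUBSYS SUBNAME(SMS)  INITRTN(IGDSSIIN)

Each SUBSYS statement names the primary JES or defines one secondary subsystem; at IPL the master scheduler processes these entries in order.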
Processor Storage Overview
Processor storage consists of central storage plus expanded storage. The system uses a portion of both central storage and virtual storage. To determine how much central storage is available to the installation, the system's fixed storage requirements must be subtracted from the total central storage. The central storage available to an installation can be used for the concurrent execution of the paged-in portions of any installation programs.
Note: Each installation is responsible for establishing many of the central storage parameters that govern RSM's processing.
Central — Central storage, often referred to as main storage, provides the system with directly addressable fast-access storage of data. Both data and programs must be loaded into central storage (from input devices) before they can be processed. Main storage may include one or more smaller, faster-access buffer storages, sometimes called caches. A cache is usually physically associated with a CPU or an I/O processor. The effects, except on performance, of the physical construction and use of distinct storage media are not observable by the program.
Expanded — Expanded storage may be available on some models. Expanded storage, when available, can be accessed by all CPUs in the configuration by means of instructions that transfer 4 KB blocks of data from expanded storage to main storage or from main storage to expanded storage. Each 4 KB block in expanded storage is addressed by means of a 32-bit unsigned binary integer called an expanded-storage block number.
CPU — The central processing unit (CPU) is the controlling center of the system. It contains the sequencing and processing facilities for instruction execution, interruption action, timing functions, initial program loading, and other machine-related functions. The physical implementation of the CPU may differ among models, but the logical function remains the same. The result of executing an instruction is the same for each model, provided that the program complies with the compatibility rules. The CPU, in executing instructions, can process binary integers and floating-point numbers of fixed length, decimal integers of variable length, and logical information of either fixed or variable length. Processing may be in parallel or in series; the width of the processing elements, the multiplicity of the shifting paths, and the degree of simultaneity in performing the different types of arithmetic differ from one CPU to another without affecting the logical results.
Auxiliary — An installation needs auxiliary direct access storage devices (DASD) for placement of all system data sets. Enough auxiliary storage must be available for the programs and data that comprise the system. Auxiliary storage used to support basic system requirements has three logical areas:
· A system data set storage area
· Paging data sets, for backup of all pageable address spaces
· Swap data sets, used for LSQA pages and private area pages that are swapped in with the address space (also called the working set)
Storage managers
In an OS/390 system, storage is managed by the following storage component managers:
Real — The real storage manager (RSM) controls the allocation of central storage during initialization and pages in user or system functions for execution. Some RSM functions:
· Allocate central storage to satisfy GETMAIN requests for SQA and LSQA
· Allocate central storage for page fixing
· Allocate central storage for an address space that is to be swapped in
· Allocate and initialize control blocks and queues related to expanded storage
Virtual — Each installation can use virtual storage parameters to specify how certain virtual storage areas are to be allocated. These parameters have an impact on central storage use and overall system performance.
Auxiliary — The auxiliary storage manager (ASM) code controls the use of page and swap data sets. As a system programmer, you are responsible for:
· Page and swap operations
· Page and swap data set sizes
· Space calculation
· Performance of page and swap data sets
· Estimating the total size of the paging data sets
A sketch of how a paging data set is defined follows this list.
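Page data sets are defined with the IDCAMS DEFINE PAGESPACE command and then named to the system (for example, through the PAGE= parameter of IEASYSxx). A minimal sketch; the data set name, size, and volume serial are hypothetical:

//DEFPAGE  EXEC PGM=IDCAMS
//* Define a local page data set on volume PAGE01
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE PAGESPACE(NAME(SYS1.LOCAL.PAGE01) -
         CYLINDERS(200) -
         VOLUME(PAGE01))
/*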
Virtual Storage Manager
Virtual storage is managed by the virtual storage manager (VSM). Its main function is to distribute the virtual storage among all requests. Virtual storage is requested with the GETMAIN or STORAGE OBTAIN macro and returned to the virtual storage manager with the FREEMAIN or STORAGE RELEASE macro.
Virtual Storage
Virtual storage is normally larger than main storage (called real storage in OS/390). The size of real storage depends on the CPU type. In a computing system without virtual storage, a program cannot be executed unless there is enough storage to hold it; in addition, all the storage it uses remains allocated until it finishes. An OS/390 program resides in virtual storage, and only the parts of the program currently active need to be in real storage at processing time. The inactive parts are held in auxiliary storage, on DASD data sets called page data sets. An active virtual storage page resides in a real storage frame. An inactive virtual storage page resides in an auxiliary storage slot. Moving pages between frames and slots is called paging.
Estimating Virtual Storage
Estimating the virtual storage allocated at an installation is important primarily because this storage must be backed by central storage in some ratio (for example, 25%). This backing storage contributes significantly to an installation's total central storage requirements. Virtual storage must also be backed by expanded storage or auxiliary storage. Each installation can use virtual storage parameters to specify how certain virtual storage areas are to be allocated. These parameters have an impact on central storage use and overall system performance.
Virtual Storage Address Space
A two-gigabyte virtual storage address space is provided for:
· The master scheduler address space
· JES
· Other system component address spaces, such as allocation, system trace, system management facilities (SMF), and dumping services
· Each user (batch or TSO/E)
The system uses a portion of each virtual address space. Each virtual address space consists of:
· The common area below 16 megabytes
· The private area below 16 megabytes
· The extended common area above 16 megabytes
· The extended private area above 16 megabytes
Program compile, link-edit, and selection for execution
Program execution
An OS/390 system may appear to be one big block of code that drives your CPU. Actually, OS/390 is a complex system composed of many different smaller blocks of code, each of which performs a specific function within the system. Each system function is composed of one or more load modules. In an OS/390 environment, a load module represents the basic unit of machine-readable executable code. Load modules are created by combining one or more object modules and processing them with a link-edit utility. Link-editing is a process that resolves external references and addresses. The functions on your system, therefore, are one or more object modules that have been combined and link-edited.
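A minimal sketch of a link-edit step (data set and member names hypothetical; IEWL is the conventional program name for the linkage editor):

//LKED     EXEC PGM=IEWL,PARM='LIST,MAP,XREF'
//* Read the object module from SYSLIN, write the load module to SYSLMOD
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(1,1))
//SYSLIN   DD DSN=IBMUSER.OBJLIB(MYPROG),DISP=SHR
//SYSLMOD  DD DSN=IBMUSER.LOADLIB(MYPROG),DISP=SHR

The resulting member MYPROG can then be selected for execution with EXEC PGM=MYPROG, with the load library made available through STEPLIB, JOBLIB, or the LNKLST.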
LLA and Module Search Order
The system searches for a requested module in the following order:
1. Modules that were loaded under the current task (LLEs)
2. The job pack area (JPA)
3. TASKLIB, STEPLIB, JOBLIB, or any libraries indicated by a DCB specified as an input parameter to the macro used to request the module (LINK, LINKX, LOAD, ATTACH, ATTACHX, XCTL, or XCTLX)
4. The active link pack area (LPA), which contains the FLPA and MLPA
5. The pageable link pack area (PLPA)
6. SYS1.LINKLIB and the libraries concatenated to it through the LNKLSTxx member of parmlib
A sketch of step 3 in practice follows this list.
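As an illustration of step 3, a STEPLIB DD makes a private load library part of the search order for one job step (names hypothetical):

//RUN      EXEC PGM=MYPROG
//* MYPROG is fetched from the STEPLIB library before LPA and LNKLST are searched
//STEPLIB  DD DSN=IBMUSER.TEST.LOADLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*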
The LNKLST concatenation
The LNKLST concatenation is established at IPL time. It consists of SYS1.LINKLIB, followed by the libraries specified in the LNKLSTxx member(s) of SYS1.PARMLIB. The LNKLSTxx member is selected through the LNK parameter in the IEASYSxx member of SYS1.PARMLIB. In addition, the system automatically concatenates the data sets SYS1.MIGLIB and SYS1.CSSLIB to SYS1.LINKLIB.
The building of the LNKLST concatenation happens during an early stage of the IPL process, before any user catalogs are accessible, so only data sets whose catalog entries are in the system master catalog can be included in the linklist by name alone. To include a user-cataloged data set in the LNKLST concatenation, you have to specify both the name of the data set and the volume serial number (VOLSER) of the DASD volume on which the data set resides.
Note: The number of data sets that you can concatenate to form the LNKLST concatenation is limited by the total number of DASD extents the data sets occupy. The total number of extents must not exceed 255. When the limit is exceeded, the system writes error message IEA328E to the operator's console.
These data sets are concatenated in the order in which they appear in the LNKLSTxx member(s), and the system creates a data extent block (DEB) that describes the data sets concatenated to SYS1.LINKLIB and their extents. The DEB contains details of each physical extent allocated to the linklist. These extents remain in effect for the duration of the IPL. After this processing completes, library lookaside (LLA) is started; it manages the LNKLST data sets and can be used to control updates to them.
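A hedged sketch of a LNKLSTxx member (data set names and the volume serial are hypothetical; one entry per line, with the VOLSER in parentheses for a data set not cataloged in the master catalog):

SYS1.PROD.LINKLIB
IBMUSER.COMMON.LOADLIB(DASD01)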
Library Lookaside (LLA)
The library lookaside (LLA) overview
Library lookaside (LLA) is an address space that maintains a copy of the directory entries of the libraries that it manages. Since the entries are cached, the system does not need to read the data set directory entries to find out where a module is stored before fetching it from DASD. This greatly reduces I/O operations. The main purpose of using LLA is to improve the performance of module fetching on your system.
How LLA improves performance
LLA improves module fetch performance in the following ways:
1. By maintaining (in the LLA address space) copies of the library directories the system uses to locate load modules. The system can quickly search the LLA copy of a directory in virtual storage instead of using costly I/O to search the directories on DASD.
2. By placing (or staging) copies of selected modules in the virtual lookaside facility (VLF) data space DCSVLLA when you define the LLA class to VLF and start VLF. The system can quickly fetch modules from virtual storage, rather than using slower I/O to fetch the modules from DASD.
3. By determining which modules, if staged, would provide the most benefit to module fetch performance. LLA evaluates modules as candidates for staging based on statistics it collects about the members of the libraries it manages (such as module size, frequency of fetches per module (fetch count), and the time required to fetch a particular module). If necessary, you can directly influence LLA staging decisions through installation exit routines (CSVLLIX1 and CSVLLIX2).
A sketch of the CSVLLAxx member follows.
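As a hedged sketch, a CSVLLAxx member that adds a production library to LLA management and freezes its directory (data set name hypothetical; statement spellings as I recall them, so verify against your Initialization and Tuning Reference):

LIBRARIES(IBMUSER.PROD.LOADLIB)
FREEZE(IBMUSER.PROD.LOADLIB)

After updating a managed library, the operator refreshes the cached directories with F LLA,REFRESH (all libraries) or applies another member with F LLA,UPDATE=xx.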
DFSMS: Storage Management Subsystem
Introduction to DFSMS
Introduction to data management
Data management is the part of the operating system that organizes, identifies, stores, catalogs, and retrieves all the data (including programs) that your installation uses. Data management does these main tasks:
· Sets aside (allocates) space on DASD volumes
· Automatically retrieves cataloged data sets by name
· Controls access to data
One of the elements of data management is the access methods component, described in the next visuals. This chapter describes MVS data management when processing different types of data sets. Also included are some comments about how you should name data sets.
DFSMS/MVS is a set of products associated with OS/390 that is responsible for data management. DFSMS/MVS packages four MVS data management functional components as a single, integrated software package:
DFSMS environment
DFSMS/MVS functional components
DFSMSdfp (data facilities) — Provides storage, data, program, and device management. It is made up of several components such as access methods, OPEN/CLOSE/EOV routines, catalog management, DADSM (DASD space control), utilities, IDCAMS, SMS, NFS, ISMF, and other functions.
DFSMSdss (data set services) — Provides data movement, copy, backup, and space management functions (see the sample job after this visual).
DFSMShsm (hierarchical storage management) — Provides backup, recovery, migration, and space management functions. It invokes DFSMSdss for certain of its functions.
DFSMSrmm (removable media management) — Provides management functions for removable media such as tape cartridges, 3420 reels, and optical media.
Before we discuss the DFSMS/MVS components, let's briefly talk about data sets, data organization, volume organization, and data management.
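First, for a flavor of DFSMSdss in practice, the job below makes a logical dump of a group of data sets. ADRDSSU is the DFSMSdss program name; the data set names and filtering mask are hypothetical:

//DSSDUMP  EXEC PGM=ADRDSSU
//* Logical dump of all data sets matching IBMUSER.** to a backup file
//SYSPRINT DD SYSOUT=*
//BACKUP   DD DSN=IBMUSER.DSS.BACKUP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(IBMUSER.**)) OUTDDNAME(BACKUP)
/*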
Data sets
[Diagram: DASD volumes containing data sets such as Dataset.abc, Dataset.def, and Dataset.ghi]
An MVS data set is a collection of logically related data records stored on one volume or a set of volumes. A data set can be, for example, a source program, a library of macros, or a file of data records used by a processing program. You can print a data set or display it on a terminal. The logical record is the basic unit of information used by a processing program. As an exception, the OS/390 UNIX services component supports Hierarchical File System (HFS) data sets, where the collection is of bytes and there is no concept of logically related data records.
Data can be stored on a direct access storage device (DASD), magnetic tape volume, or optical media. The term "DASD" applies to disks or simulated equivalents of disks. All types of data sets can be stored on DASD, but only sequential data sets can be stored on magnetic tape. We discuss the types of data sets later.
The next visuals discuss the logical attributes of a data set, which are specified at data set allocation time in:
· The DCB/ACB control blocks in the application program
· The DD card (explicitly or through the Data Class (DC) option)
· The ACS Data Class (DC) routine (overridden by the DD card)
After allocation, these attributes are kept in catalogs and VTOCs.
Data set name rules
Whenever you allocate a new data set, you (or MVS) must give the data set a unique name. Usually, the data set name is given as the DSNAME keyword in JCL. A data set name can be one name segment, or a series of joined name segments. Each name segment represents a level of qualification. For example, the data set name VERA.LUZ.DATA is composed of three name segments. The first name on the left is called the highest-level qualifier; the last is the lowest-level qualifier.
Each name segment (qualifier) is one to eight characters, the first of which must be alphabetic (A to Z) or national (@, #, $). The remaining seven characters are either alphabetic, numeric (0-9), national, or a hyphen (-). The period (.) separates name segments from each other. Including all name segments and periods, the length of the data set name must not exceed 44 characters. Thus, a maximum of 22 name segments can make up a data set name.
You should only use the low-level qualifier GxxxxVyy, where xxxx and yy are numbers, in the names of generation data sets (to be seen later). You can define a data set with GxxxxVyy as the low-level qualifier of non-generation data sets only if a generation data group with the same base name does not exist. However, we recommend that you restrict GxxxxVyy qualifiers to generation data sets, to avoid confusing generation data sets with other types of non-VSAM data sets.
Record Format (RECFM) Logical record length (LRECL) Block Size (BLKSIZE)
Logical records
A logical record is a unit of information about a unit of processing (a customer, an account, a payroll employee). It is the smallest amount of data to be processed, and it is made up of fields which contain information recognized by the processing application. Logical records, when located on DASD, tape, or optical devices, are grouped into physical records named blocks. Each block of data on a DASD volume has a distinct location and a unique address, making it possible to find any block without extensive searching. Logical records can be stored and retrieved either directly or sequentially.
DASD volumes are used for storing data and executable programs, including the operating system itself, and for temporary working storage. One DASD volume can be used for many different data sets, and space on it can be reallocated and reused. The maximum length of a logical record (LRECL) is limited by the physical size of the media used.
Record formats
Use the RECFM parameter to specify the format and characteristics of the logical records in a new data set: for example, whether they are blocked (several logical records in one block), whether there are no embedded short blocks, whether an ANSI control character is present, and so on. For further information on the RECFM parameter, refer to DFSMS/MVS Using Data Sets, SC , and OS/390: MVS JCL Reference, GC
A sketch of these attributes in JCL follows.
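As an illustration of where RECFM, LRECL, and BLKSIZE are specified, this DD statement allocates a new data set with fixed blocked 80-byte records (names and sizes hypothetical; coding BLKSIZE=0, or omitting it, would instead let the system choose an optimum block size):

//NEWDS    DD DSN=VERA.LUZ.DATA,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(5,2),RLSE),
//            RECFM=FB,LRECL=80,BLKSIZE=27920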
Data set organization (DSORG)
There are several different types of data set organization used in OS/390. Each organization provides specific benefits to its user:
Physical sequential (PS) — Sequential data sets can exist on DASD, tape, and optical devices.
Partitioned organization (PO) — Partitioned data sets are similar in organization to a library and are often referred to this way. A library normally contains a great number of "books," and sorted directory entries are used to locate them. In a PDS (partitioned data set) the "books" are called members, and to locate them, they are pointed to by entries in a directory, as shown in this visual. The members are individual sequential data sets and can be read or written sequentially, once they have been located via the directory. It is almost the same idea as the directory and file organization in a PC. Partitioned data sets can only exist on DASD.
Each member has a unique name, one to eight characters long, stored in a directory that is part of the data set. The records of a given member are written or retrieved sequentially. The main advantage of using a partitioned data set is that, without searching the entire data set, you can retrieve any individual member after the data set is opened. For example, in a program library (always a partitioned data set) each member is a separate program or subroutine. The individual members can be added or deleted as required. When a member is deleted, the member name is removed from the directory, but the space used by the member cannot be reused until the data set is reorganized; that is, compressed using the IEBCOPY utility (generally requested through an ISPF panel; see the sketch after this visual). We discuss IEBCOPY and other DFSMS/MVS utilities later.
The directory, a series of 256-byte records at the beginning of the data set, contains an entry for each member. Each directory entry contains the member name and the starting location of the member within the data set, as shown. Also, you can specify as many as 62 bytes of information in the entry. The directory entries are arranged by name in alphanumeric collating sequence. Each directory block contains a two-byte count field that specifies the number of active bytes in the block (including the count field). Each block is preceded by a hardware-defined key field containing the name of the last member entry in the block, that is, the member name with the highest binary value.
Partitioned data set member entries vary in length and are blocked into the member area. If you do not specify a block size (BLKSIZE), the Open routine determines an optimum block size for you. Therefore, you no longer need to perform calculations based on track length. When you allocate space for your data set, you can specify the average record length in kilobytes or megabytes by using the SPACE and AVGREC parameters, and have the system use the block size it calculated for your data set.
Another type of PO data set is the PDSE, which must be SMS-managed; we will talk about its advantages later.
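A minimal sketch of the IEBCOPY compress just described (data set name hypothetical; compress-in-place is requested by making INDD and OUTDD refer to the same DD):

//COMPRESS EXEC PGM=IEBCOPY
//* Reclaim the space of deleted or rewritten members in a PDS
//SYSPRINT DD SYSOUT=*
//PDSDD    DD DSN=IBMUSER.SOURCE.PDS,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=PDSDD,INDD=PDSDD
/*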
Major DFSMS/MVS access methods
Basic Direct Access Method (BDAM) arranges records in any sequence your program indicates, and retrieves records by actual or relative address. If you do not know the exact location of a record, you can specify a point in the data set where a search for the record is to begin. Data sets organized this way are called direct data sets. IBM does not recommend using BDAM because it tends to require device-dependent code. In addition, using keys is much less efficient than in the Virtual Storage Access Method (VSAM). BDAM is supported by DFSMS/MVS only to enable compatibility with other IBM operating systems. See Appendix C, "Processing Direct Data Sets," in DFSMS/MVS Using Data Sets, SC
Object Access Method (OAM) processes very large named byte streams (objects) that have no record boundary or other internal orientation. These objects can be recorded in a DB2 database or on an optical storage volume. For information on OAM, see DFSMS/MVS Object Access Method Application Programmer's Reference, SC , and DFSMS/MVS Object Access Method Planning, Installation, and Storage Administration Guide for Object Support, SC
BPAM to access PDS and PDSE
Basic Partitioned Access Method (BPAM) arranges records as members of a partitioned data set (PDS) or a partitioned data set extended (PDSE) on DASD. You can view each member like a sequential data set. A partitioned data set or PDSE includes a directory that relates member names to locations within the data set. The directory is used to retrieve individual members, and for program libraries (load modules and program objects) contains program attributes required to load and re-bind the member.
PDS and PDSE data organizations
Partitioned data set (PDS) is an old MVS data organization with good features such as:
· Easier management: Grouping related data sets under a single name makes MVS data management easier. Files stored as members of a PDS can be processed either individually, or all the members can be processed as a unit.
· Space savings: Small members fit in just one DASD track.
· Good usability: Members of a PDS can be used as sequential data sets, and they can be concatenated to sequential data sets. They are also easy to create with JCL or ISPF, and easy to manipulate with ISPF utilities or TSO commands.
However, the PDS organization leaves several things to be desired:
· There is no mechanism to reuse the area that contained a deleted or rewritten member. This unused space must be reclaimed with the IEBCOPY utility function called compression.
· The directory size is not expandable, causing an overflow exposure. The area for members may grow using secondary allocations; this is not true for the directory.
· A PDS has no mechanism to stop the directory from being overwritten if a program mistakenly opens it for sequential output. If that happens, the directory is destroyed and all the members are lost. Also, PDS DCB attributes can easily be changed by mistake: if you add a member whose DCB characteristics differ from those of the other members, you will change the DCB attributes of the entire PDS, and all the old members will become unusable.
· Directory search time could be better. Entries in the directory are physically ordered by the collating sequence of the names of the members they point to, so any insertion may cause a full rearrangement of the entries. There is also no index to the directory entries; the search is sequential using a CKD format. If the directory is big, the I/O operation takes more time.
· Sharing facilities could be improved. To update a member of a PDS, you need exclusive access to the entire data set.
All these improvements require almost total compatibility at the program and user level with the old PDS.
Structure of a PDS
Each member has a unique name, one to eight characters long, stored in a directory that is part of the data set. The records of a given member are written or retrieved sequentially. See OS/390 DFSMS Macro Instructions for Data Sets for the macros used with PDSs.
The main advantage of using a PDS is that, without searching the entire data set, you can retrieve any individual member after the data set is opened. For example, in a program library (always a PDS), each member is a separate program or subroutine. The individual members can be added or deleted as required. When a member is deleted, the member name is removed from the directory, but the space used by the member cannot be reused until the data set is reorganized; that is, compressed using the IEBCOPY utility.
The directory, a series of 256-byte records at the beginning of the data set, contains an entry for each member. Each directory entry contains the member name and the starting location of the member within the data set. Also, you can specify as many as 62 bytes of information in the entry. The directory entries are arranged by name in alphanumeric collating sequence.
The starting location of each member is recorded by the system as a relative track address (from the beginning of the data set) rather than as an absolute track address. Thus, an entire data set that has been compressed can be moved without changing the relative track addresses in the directory. The data set can be considered as one continuous set of tracks, regardless of where the space was actually allocated.
If there is not sufficient space available in the directory for an additional entry, or not enough space available within the data set for an additional member, or no room on the volume for additional extents, no new members can be stored. A directory cannot be extended, and a PDS cannot cross a volume boundary.
A PDS Directory Entry
Each member entry contains a member name or an alias. Each entry also contains the relative track address of the member and a count field. It can also contain a user data field. The last entry in the last used directory block has a name field of maximum binary value (all 1s), a TTR field of zeros, and a zero-length user data field.
Member Name — Specifies the member name or alias. It contains as many as 8 alphanumeric characters, left-justified and padded with blanks if necessary.
TTR — A pointer to the first block of the member. TT is the number of the track, starting from 0 for the beginning of the data set, and R is the number of the block, starting from 1 for the beginning of that track.
C — Specifies the number of halfwords contained in the user data field. It can also contain additional information about the user data field.
The operating system supports a maximum of three pointers in the user data field. Additional pointers can be contained in a record called a note list, discussed in the following note. The pointers can be updated automatically if the data set is moved or copied by a utility program such as IEHMOVE. The data set must be marked unmovable under any of the following conditions:
· More than three pointers are used in the user data field.
· The pointers in the user data field or note list do not conform to the standard format.
PDSE structure
The advantages of a PDSE compared with a PDS are:
· Space is reclaimed without a compress. A PDSE automatically reuses space, without needing an IEBCOPY compress. A list of available space is kept in the directory. When a PDSE member is updated or replaced, it is written in the first available space. This is either at the end of the data set or in a space in the middle of the data set marked for reuse. This space need not be contiguous. The objective of the space reuse algorithm is not to extend the data set unnecessarily.
· The directory can grow dynamically as the data set expands. Logically, a PDSE directory looks the same as a PDS directory: it consists of a series of directory records in a block. Physically, it is a set of pages at the front of the data set, plus additional pages interleaved with member pages. Five directory pages are initially created at the same time as the data set. New directory pages are added, interleaved with the member pages, as new directory entries are required. A PDSE always occupies at least five pages of storage. The directory is like a KSDS index structure, making a search much faster. It cannot be overwritten by being opened for sequential output.
· If you try to add a member with DCB characteristics that differ from those of the rest of the members, you will get an error.
· You can open a PDSE member for output or update without locking the entire data set. The sharing control is at the member level, not the data set level.
There is a restriction on PDSEs: you cannot use a PDSE for certain system data sets that are opened during the IPL/NIP time frame. A sketch of a PDSE allocation follows.
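A minimal sketch of allocating a PDSE in JCL (data set name and sizes hypothetical; DSNTYPE=LIBRARY is what requests a PDSE rather than a PDS, and the data set must be SMS-managed):

//NEWPDSE  DD DSN=IBMUSER.NEW.PDSE,DISP=(NEW,CATLG),
//            SPACE=(CYL,(5,5,10)),DSNTYPE=LIBRARY,
//            RECFM=FB,LRECL=80

For a PDSE, the directory-block value in SPACE (here 10) is accepted for compatibility but effectively ignored, since the directory grows dynamically.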
45
Processing a PDSE

PDSEs have several features that improve both your productivity and system performance. The main advantage of using a PDSE over a PDS is that a PDSE automatically reuses space within the data set, without anyone having to periodically run a utility to reorganize it. The size of a PDS directory is fixed regardless of the number of members in it, while the size of a PDSE directory is flexible and expands to fit the members stored in it. Also, the system reclaims space automatically whenever a member is deleted or replaced, and returns it to the pool of space available for allocation to other members of the same PDSE. The space can be reused without an IEBCOPY compress.

Other advantages of PDSEs are:
· PDSE members can be shared. This makes it easier to maintain the integrity of the PDSE when modifying separate members of the PDSE at the same time.
· Reduced directory search time. The PDSE directory, which is indexed, is searched using that index; the PDS directory, which is organized alphabetically, is searched sequentially. The system might cache the directories of frequently used PDSEs in storage.
· Creation of multiple members at the same time. For example, you can open two DCBs to the same PDSE and write two members at the same time.
· PDSEs can contain up to 123 extents. An extent is a continuous area of space on a DASD storage volume, occupied by or reserved for a specific data set.

When written to DASD, logical records are extracted from the user's blocks and reblocked. When read, records in a PDSE are reblocked into the block size specified in the DCB. The block size used for the reblocking can differ from the original block size.
46
PDSE and PDS Differences
PDSE characteristics:
· Data set has a 123-extent limit.
· Directory is expandable and indexed by member name, so directory searches are faster.
· PDSEs are device-independent: records are reblockable and the TTR is simulated as a system key.
· Uses dynamic space allocation and reclamation. It is highly recommended that a PDSE be allocated with secondary space to permit dynamic variation in the size of the PDSE index.
· You can create multiple members at the same time.

PDS characteristics:
· Data set has a 16-extent limit.
· Fixed-size directory is searched sequentially.
· TTR addressing and block sizes are device-dependent.
· Must use IEBCOPY COMPRESS to reclaim space.
· You can create one member at a time.

A hedged allocation sketch for a PDSE follows this comparison.
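As a sketch, a PDSE is allocated much like a PDS but with DSNTYPE=LIBRARY on the DD statement; the data set name, unit, and space values below are illustrative assumptions, and the sketch also assumes an SMS-managed environment (SMS classes would normally be assigned by the installation's ACS routines):

//ALLOCPE  JOB ...
//STEP1    EXEC PGM=IEFBR14
//* DSNTYPE=LIBRARY requests a PDSE; no directory-block count is
//* needed in SPACE because the PDSE directory grows dynamically
//NEWPDSE  DD DSN=USER.SAMPLE.PDSE,DISP=(NEW,CATLG),
//            DSNTYPE=LIBRARY,SPACE=(CYL,(5,5)),
//            UNIT=SYSALLDA,RECFM=FB,LRECL=80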
47
Sequential access methods
There are two sequential access methods, basic sequential access method (BSAM) and queued sequential access method (QSAM). Both access data organized in a physically sequential manner: the physical records (containing logical records) are stored sequentially in the order in which they are entered.

One special form of this organization is the extended format data set. Extended format data sets have a different internal storage format from a sequential data set that is not extended (fixed block with a 32-byte suffix). This storage format gives extended format data sets additional usability and availability characteristics:
· They can be allocated in the compressed format (and can then be referred to as compressed format data sets). A compressed format data set is a type of extended format data set whose internal storage format allows for data compression.
· They allow data striping, that is, a multivolume sequential file where data may be accessed in parallel.
· They are able to recover from a padding error situation.
Extended format data sets must be SMS-managed and must reside on DASD (see the allocation sketch at the end of this section). You cannot use an extended format data set for certain system data sets.

Another type of this organization is the hierarchical file system. HFS files are POSIX-conforming files which reside in an HFS data set. They are byte-oriented rather than record-oriented, as MVS files are. They are identified and accessed by specifying the path leading to them. Programs can access the information in HFS files through OS/390 UNIX system calls, such as open(pathname), read(file descriptor), and write(file descriptor). Programs can also access the information in HFS files through the MVS BSAM, QSAM, and VSAM access methods. When using BSAM or QSAM, an HFS file is simulated as a multi-volume sequential data set. When using VSAM, an HFS file is simulated as an ESDS. HFS data sets are:
· Supported by standard DADSM create, rename, and scratch
· Supported by DFSMShsm for dump/restore and migrate/recall if DFSMSdss is used as the data mover
· Not supported by IEBCOPY or the DFSMSdss COPY function

The differences between QSAM and BSAM are:
· QSAM deblocks logical records and does look-ahead reads (it anticipates reads). In BSAM these tasks are done by the calling program.
· QSAM synchronizes the task with the I/O operation (it places the task in a wait during the I/O operation). In BSAM this task is done by the calling program (via the CHECK macro).
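As a hedged sketch, an extended format sequential data set is commonly requested in JCL through the DSNTYPE keyword together with an SMS data class; the data set name and the data class name DCEXT are assumptions here, since data classes are defined by each installation:

//* DSNTYPE=EXTREQ requires extended format; the DATACLAS name is
//* installation-defined and appears here only as an example
//EXTSEQ   DD DSN=USER.BIG.SEQ,DISP=(NEW,CATLG),
//            DSNTYPE=EXTREQ,DATACLAS=DCEXT,
//            SPACE=(CYL,(100,50))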
48
Virtual Storage Access Method (VSAM)
VSAM is an access method used to organize data and to maintain information about that data in a catalog. There are two major parts of VSAM:
· Catalog management: the catalog contains information about the data sets.
· Record management: VSAM can be used to organize records into four types of data sets:
  - Key-sequenced (KSDS)
  - Entry-sequenced (ESDS)
  - Linear (LDS)
  - Relative record, with fixed or variable length records (RRDS)
The primary difference among these types of data sets is the way in which their records are stored and accessed. VSAM arranges records by an index key, by relative byte address, or by relative record number. Data organized by VSAM is cataloged for easy retrieval and is stored in one of the four types of data sets.
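For example, a KSDS is defined with the IDCAMS DEFINE CLUSTER command. This is a minimal sketch; the data set name, key length, sizes, and volume are chosen purely for illustration:

//DEFKSDS  JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* INDEXED requests a KSDS with an 8-byte key at offset 0 */
  DEFINE CLUSTER (NAME(USER.SAMPLE.KSDS) -
    INDEXED -
    KEYS(8 0) -
    RECORDSIZE(80 200) -
    CYLINDERS(5 1) -
    VOLUMES(VSER01))
/*

Specifying NONINDEXED, NUMBERED, or LINEAR instead of INDEXED would request an ESDS, an RRDS, or an LDS, respectively.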
49
Generation data groups (GDG)
You can catalog successive updates, or generations, of related data sets. They are called generation data groups (GDG). Each data set within a GDG is called a generation data set, or generation. Within a GDG, the generations can have like or unlike DCB attributes and data set organizations. If the attributes and organizations of all generations in a group are identical, the generations can be retrieved together as a single data set.

Generation data sets can be sequential, direct, or indexed sequential (an old and little-used data set organization, replaced by VSAM KSDS). They cannot be partitioned, HFS, or VSAM. The same GDG may contain both SMS and non-SMS data sets.

There are advantages to grouping related data sets. For example, the catalog management routines can refer to the information in a special index, called a generation index, in the catalog. Thus:
· All of the data sets in the group can be referred to by a common name.
· The operating system is able to keep the generations in chronological order.
· Outdated or obsolete generations can be automatically deleted by the operating system.
Another advantage is the ability to refer to a new generation using the same JCL.

Generation data sets have sequentially ordered absolute and relative names that represent their age. The catalog management routines use the absolute generation name; older data sets have smaller absolute numbers. The relative name is a signed integer used to refer to the latest (0), the next to the latest (-1), and so forth, generation. For example, the data set name LAB.PAYROLL(0) refers to the most recent data set of the group; LAB.PAYROLL(-1) refers to the second most recent; and so forth. The relative number can also be used to catalog a new generation (+1). If you create a generation data set with a relative generation number of (+1), the system recognizes any subsequent reference to (+1) throughout the job as having the same absolute generation number.

A GDG base is allocated in an integrated catalog facility or VSAM catalog before the generation data sets are cataloged. Each GDG is represented by a GDG base entry. Use the AMS DEFINE command to allocate the GDG base. The model DSCB must exist on the GDG catalog volume.
50
Defining a generation data group
//DEFGDG1  JOB ...
//STEP1    EXEC PGM=IDCAMS
//GDGMOD   DD DSNAME=GDG01,DISP=(,KEEP),
//            SPACE=(TRK,(0)),UNIT=DISK,VOL=SER=VSER03,
//            DCB=(RECFM=FB,BLKSIZE=2000,LRECL=100)
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP -
    (NAME(GDG01) -
    NOEMPTY -
    NOSCRATCH -
    LIMIT(255) )
/*

The DEFINE GENERATIONDATAGROUP command creates a catalog entry for a generation data group (GDG). Here it defines a GDG base catalog entry, GDG01. Its parameters are:
· NAME specifies the name of the GDG, GDG01. Each GDS in the group will have the name GDG01.GxxxxVyy, where xxxx is the generation number and yy is the version number.
· NOEMPTY specifies that only the oldest generation data set is to be uncataloged when the maximum is reached (recommended).
· EMPTY (the alternative) specifies that all data sets in the group are to be uncataloged by VSAM when the group reaches the maximum number of data sets (as specified by the LIMIT parameter) and one more GDS is added to the group.
· NOSCRATCH specifies that when a data set is uncataloged, its DSCB is not to be removed from its volume's VTOC. Therefore, even if a data set is uncataloged, its records can be accessed when it is allocated to a job step with the appropriate JCL DD statement.
· LIMIT specifies that the maximum number of GDG data sets in the group is 255. The LIMIT parameter is required.

Next, a generation data set is defined within the GDG by using JCL statements:

//DEFGDG2  JOB ...
//STEP1    EXEC PGM=IEFBR14
//GDGDD1   DD DSNAME=GDG01(+1),DISP=(NEW,CATLG),
//            SPACE=(TRK,(10,5)),VOL=SER=VSER03,UNIT=DISK
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
/*

The job DEFGDG2 allocates space and catalogs a GDG data set in the newly defined GDG. The job control statement GDGDD1 DD specifies the GDG data set in the GDG.
51
Absolute generation and version numbers
An absolute generation and version number is used to identify a specific generation of a generation data group. The same generation data set may have different versions, which are maintained by your installation. The version number allows you to perform normal data set operations without disrupting the management of the generation data group. For example, if you want to update the second generation in a three-generation group, replace generation 2, version 0, with generation 2, version 1. Only one version is kept for each generation.

The generation and version number are in the form GxxxxVyy, where xxxx is an unsigned four-digit decimal generation number (0001 through 9999) and yy is an unsigned two-digit decimal version number (00 through 99). For example:
· A.B.C.G0001V00 is generation data set 1, version 0, in generation data group A.B.C.
· A.B.C.G0009V01 is generation data set 9, version 1, in generation data group A.B.C.

The number of generations and versions is limited by the number of digits in the absolute generation name; that is, there can be 9,999 generations, and each generation can have 100 versions. The system automatically maintains the generation number. The number of generations kept depends on the size of the generation index. For example, if the size of the generation index allows ten entries (the LIMIT parameter in AMS DEFINE), the ten latest generations can be maintained in the generation data group (with the NOEMPTY parameter in AMS DEFINE).

You can catalog a generation using either absolute or relative numbers. When a generation is cataloged, a generation and version number is placed as a low-level entry in the generation data group. To catalog a version number other than V00, you must use an absolute generation and version number.
52
Relative generation number
As an alternative to using absolute generation and version numbers when cataloging or referring to a generation, you can use a relative generation number. To specify a relative number, use the generation data group name followed by a negative integer, a positive integer, or a zero (0), enclosed in parentheses: for example, A.B.C(-1), A.B.C(+1), or A.B.C(0). The value of the specified integer tells the operating system what generation number to assign to a new generation data set, or it tells the system the location of an entry representing a previously cataloged old generation data set.

When you use a relative generation number to catalog a generation, the operating system assigns an absolute generation number and a version number of V00 to represent that generation. The absolute generation number assigned depends on the number last assigned and the value of the relative generation number that you are now specifying. For example, if in a previous job A.B.C.G0006V00 was the last generation cataloged, and you specify A.B.C(+1), the generation now cataloged is assigned the number G0007V00. Though any positive relative generation number can be used, a number greater than 1 can cause absolute generation numbers to be skipped for a new generation data set. For example, if you have a single-step job, and the generation being cataloged is a +2, one generation number is skipped. However, in a multiple-step job, one step might have a +1 and a second step a +2, in which case no numbers are skipped.

Rolled in and rolled off
When a generation data group contains its maximum number of active generation data sets, defined in the LIMIT parameter, and a new generation data set is rolled in at end-of-job step, the oldest generation data set is rolled off and is no longer active. If a generation data group is defined using DEFINE GENERATIONDATAGROUP EMPTY, and is at its limit, then, when a new generation data set is rolled in, all the currently active generation data sets are rolled off. The parameters you specify on the DEFINE GENERATIONDATAGROUP command determine what happens to rolled-off generation data sets. For example, if you specify the SCRATCH parameter, the generation data set is scratched when it is rolled off. If you specify the NOSCRATCH parameter, the rolled-off generation data set is recataloged as rolled off and is disassociated from its generation data group.

Generation data sets can be in a deferred roll-in state if the job never reached end-of-step, or if they were allocated as DISP=(NEW,KEEP) and the data set is not system-managed. Generation data sets in a deferred roll-in state can be referred to by their absolute generation numbers. You can use the access method services command ALTER ROLLIN to roll in these generation data sets, as sketched below.
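A deferred generation can be rolled in with the access method services ALTER command. This is a minimal sketch; the absolute generation name is hypothetical, reusing the GDG01 base from the earlier example:

//ROLLIN   JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Roll the deferred generation into the active GDG */
  ALTER GDG01.G0007V00 ROLLIN
/*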
53
Introduction to ICF

An integrated catalog facility (ICF) catalog has two components, a VSAM volume data set (VVDS) and a basic catalog structure (BCS). The following topics explain these components in more detail.

Basic catalog structure (BCS)
The basic catalog structure is a VSAM key-sequenced data set. It uses the data set name as a key to store and retrieve data set information. For VSAM data sets, the BCS contains volume, security, ownership, and association information. For non-VSAM data sets, the BCS contains volume, ownership, and association information. In other words, the BCS portion of the ICF catalog contains the static information about the data set, the information that changes very seldom. For non-VSAM data sets that are not SMS-managed, all catalog information is contained in the BCS alone; for the other types of data sets, further information is kept in the VVDS. Related information in the BCS is grouped into logical, variable-length, spanned records related by key. The BCS uses keys that are the data set names (plus one character for extensions). A control interval can contain multiple BCS records. To reduce the number of I/Os necessary for catalog processing, logically related data is consolidated in the BCS.

VSAM volume data set (VVDS)
The VVDS is a VSAM entry-sequenced data set (ESDS) with a 4 KB control interval size. It contains additional catalog information (not contained in the BCS) about the VSAM and SMS-managed non-VSAM data sets residing on the volume where the VVDS is located. Every volume containing any VSAM or SMS-managed data sets must have a VVDS on it. In a sense, the VVDS is a sort of VTOC extension for certain types of data sets. A VVDS may hold data set information for data sets cataloged in distinct BCSs. The VVDS contains the data set characteristics, extent information, and volume-related information of the VSAM data sets cataloged in the BCS. If you are using the storage management subsystem (SMS), the VVDS also contains data set characteristics and volume-related information for the non-VSAM, SMS-managed data sets on the volume. As you can see, the type of information kept in the VVDS is more frequently modified, more volatile, than that in the BCS.

A VVDS is recognized by the restricted data set name SYS1.VVDS.Vvolser, where volser is the volume serial number of the volume on which the VVDS resides. You can explicitly define the VVDS (via IDCAMS), or it is implicitly created when you define the first VSAM data set, or the first non-VSAM SMS-managed data set, on the volume. An explicitly defined VVDS is not related to any BCS until a data set or VSAM object is defined on the volume. As data sets are allocated on the VVDS volume, each BCS with VSAM or SMS-managed data sets residing on that volume is related to the VVDS. An explicit definition of a VVDS does not update any BCS and, therefore, can be performed before the first BCS in the installation is defined. Explicitly defining a VVDS is usually appropriate when you are initializing a new volume. If you are not running SMS, and a volume already contains some non-VSAM data sets, it is appropriate to allow the VVDS to be defined implicitly, with the default space allocation of TRACKS(10 10).

The VVDS is composed of a minimum of two records:
· A VSAM volume control record (VVCR)
· A VVDS self-describing volume record
The first logical record in a VVDS is the VSAM volume control record (VVCR). It contains information for the management of DASD space, plus the names of the BCSs that currently have cataloged VSAM or SMS-managed non-VSAM data sets on the volume. It might have a pointer to an overflow VVCR. The second logical record in the VVDS is the VVDS self-describing VVR (VSAM volume record), which contains information describing the VVDS itself. The remaining logical records in the VVDS are VVRs for VSAM objects, or non-VSAM volume records (NVRs) for SMS-managed non-VSAM data sets. The hexadecimal RBA of the record is used as its key or identifier.

VSAM volume records (VVR)
VSAM volume records contain information about the VSAM data sets residing on the volume with the VVDS. The number of VVRs for a VSAM data set varies according to the type of data set and the options specified for it.

Non-VSAM volume record (NVR)
The non-VSAM volume record (NVR) is equivalent to a VVR, but for SMS-managed non-VSAM data sets. The NVR contains SMS-related information.
54
The Master catalog
Catalogs by function
By function, catalogs can be classified as the Mastercat (master catalog) and Usercats (user catalogs). A particular case of a Usercat is the Volcat, a Usercat containing only tape library and tape volume entries. There is no structural difference between a Mastercat and a Usercat; what makes a Mastercat different is how it is used, and what data sets are cataloged in it.

Master catalog
Each system has one active Mastercat. The Mastercat does not have to reside on the system residence volume. For performance, recovery, and reliability, we recommend that you use only integrated catalog facility catalogs. A Mastercat can also be shared between different MVS images. The Mastercat for a system must contain entries for all the Usercats (and their aliases) that the system uses. The only other data sets you should catalog in the Mastercat are the system, or SYS1, data sets. These data sets must be cataloged in the Mastercat for proper system initialization.

The Mastercat at system initialization
During a system initialization, the Mastercat is read so that system data sets and Usercats can be located. Their catalog entries are placed in the in-storage catalog cache as they are read. Catalog aliases are also read during system initialization, but they are placed in an alias table separate from the in-storage catalog. Thus, if the Mastercat only contains entries for system data sets, catalogs, and catalog aliases, the entire Mastercat is in main storage by the completion of the system initialization.

Identifying the Mastercat
At IPL, you must indicate the location (volser and data set name) of the Mastercat. This can be done by:
· The SYSCATxx member of the SYS1.NUCLEUS data set, or the default member name, SYSCATLG (also in SYS1.NUCLEUS).
· The LOADxx member of SYS1.PARMLIB / SYS1.IPLPARM. We recommend this method.
55
Using aliases
Usercats
An ICF Usercat has the same structure as an ICF Mastercat; the difference is in the Usercat's function. The Mastercat should be used to contain information about system data sets (with an HLQ of SYS1) and pointers to Usercats. Usercats should be used to contain information about your installation's cataloged data sets; this is implemented through the alias concept.

Using aliases
The way to tell catalog management in which Usercat your data set is cataloged is through aliases. You define an appropriate alias name for a Usercat in the Mastercat. The highest-level qualifier (HLQ) of your data set is then matched against the aliases; this identifies the appropriate Usercat to be used to satisfy the request. In the example, all data sets with an HLQ of PAY have their information in the Usercat UCAT1, because in the Mastercat there is an alias PAY pointing to UCAT1. The ones with DEPT1 and DEPT2 have their information in the Usercat UCAT2, because in the Mastercat there are aliases DEPT1 and DEPT2 pointing to UCAT2. Note that aliases can also be used with non-VSAM data sets in order to create alternate names for the same data set.
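The aliases themselves are created in the Mastercat with the IDCAMS DEFINE ALIAS command. A sketch matching the example above (the catalog and alias names are taken from the text; the job card is a placeholder):

//DEFALIAS JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Route HLQ PAY to UCAT1, DEPT1 and DEPT2 to UCAT2 */
  DEFINE ALIAS (NAME(PAY) RELATE(UCAT1))
  DEFINE ALIAS (NAME(DEPT1) RELATE(UCAT2))
  DEFINE ALIAS (NAME(DEPT2) RELATE(UCAT2))
/*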
56
Catalog search order
Most catalog searches should be based on catalog aliases. Some alternatives to catalog aliases are available for directing a catalog request, specifically the JOBCAT and STEPCAT DD statements, the CATALOG parameter of access method services, and the name of the catalog.

Search order for cataloging
For the system to determine where a data set is to be cataloged, the following search order is used to find the catalog:
1. If the IDCAMS DEFINE (creation and cataloging) statement is used and the CATALOG parameter (which directs the search to a specific catalog) is indicated, use that catalog.
2. Otherwise, use the catalog named in the STEPCAT DD statement.
3. If there is no STEPCAT, use the catalog named in the JOBCAT DD statement.
4. If there is no JOBCAT, and the HLQ is a catalog alias, use the catalog identified by the alias, or the catalog whose name matches the HLQ of the data set.
5. If no catalog has been identified yet, use the Mastercat.

Search order for locating
The following search order is used to locate the catalog for an already-cataloged data set:
1. If, on any IDCAMS invocation that needs to locate a data set, the CATALOG parameter (which directs the search to a specific catalog) is indicated, use that catalog. If the data set is not found, fail the job.
2. Otherwise, search all catalogs specified in a STEPCAT DD statement, in order.
3. If not found, search all catalogs specified in a JOBCAT DD statement, in order.
4. If not found, and the HLQ is an alias for a catalog, search that catalog; or, if the HLQ is the name of a catalog, search that catalog. If the data set is not found, fail the job.
5. Otherwise, search the Mastercat.

Note: For SMS-managed data sets, JOBCAT and STEPCAT DD statements are not allowed and cause a job failure. They are not recommended even for non-SMS data sets, so do not use them.

To use an alias to identify the catalog to be searched, the data set or object name, or the generation data group base name, must be a qualified name. When you specify a catalog in the IDCAMS CATALOG parameter, and you have the appropriate RACF authority to the FACILITY class profile STGADMIN.IGG.DIRCAT, the catalog you specify is used. For instance:
DEFINE CLUSTER (NAME(PROD.PAYROLL) CATALOG(SYS1.MASTER.ICFCAT))
defines a data set PROD.PAYROLL to be cataloged in SYS1.MASTER.ICFCAT. You can use RACF to prevent the use of the CATALOG parameter and restrict the ability to define data sets in the Mastercat.
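To verify where a data set is actually cataloged, the IDCAMS LISTCAT command can be used. A minimal sketch, reusing the PROD.PAYROLL name from the example above:

//LISTC    JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* List the full catalog entry, including the owning catalog */
  LISTCAT ENTRIES(PROD.PAYROLL) ALL
/*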
57
Locating a data set
Before we explain the procedure used to find a data set, let's introduce some terms that will be explained in more detail later.

VTOC - A sequential data set located on a DASD volume that describes the contents of that volume.
User catalog (UCAT) - A catalog of data sets used to locate the DASD volume on which a requested data set is stored; user data sets are cataloged in this type of catalog.
Master catalog (MCAT) - Has the same structure as a user catalog, but points to system data sets, usually with a high-level qualifier (HLQ) of SYS1. It also contains information about user catalog locations and any alias pointers.
Alias - A special entry in the master catalog, pointing to a user catalog, that coincides with the HLQ of a data set. It means that data sets with this HLQ may be cataloged in that user catalog. The alias is thus used to find the user catalog that holds the data set's location information.

The standard location sequence for a request for an existing data set is:
· The MCAT is searched. If a match is found, verify whether it is:
  - A data set name: pick up the volume specification and, if the indicated device is online, check the VTOC to locate the data set on the specified volume.
  - An alias: the HLQ of the data set name equals an alias entry pointing to a UCAT; in this case, go to the referred UCAT.
· The UCAT is searched (if there was a match on the alias). If the data set name is found, proceed as in an MCAT hit.
Finally, the requesting program can access the data set.

There is another way of using catalogs, where you do not follow the standard location sequence: the DD statements named STEPCAT and JOBCAT, which introduce private catalogs. Use the JOBCAT DD statement to define a private VSAM or user catalog for the duration of a job (or of a step, for STEPCAT). The system searches the private catalog for data sets before it searches the master catalog or a user catalog associated with the first qualifier of a data set's name. It is not recommended that you use private catalogs. One of the reasons is that SMS only accesses SMS-managed data sets that are cataloged in a system catalog.
58
Cataloged and uncataloged data sets
When a data set is cataloged, the system obtains unit and volume information from the catalog. However, if the DD statement for a cataloged data set contains VOLUME=SER=serial-number, the system does not look in the catalog; in this case, you must code the UNIT parameter. When your data set is not cataloged, you must know its volume location in advance and specify it in your JCL. This can be done through the UNIT and VOL=SER parameters, as sketched below. See OS/390: MVS JCL Reference, GC , for information about the UNIT and VOL parameters. We strongly recommend that you do not have uncataloged data sets in your installation, because uncataloged data sets can cause problems with duplicate data and possible incorrect data set processing.
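Since the visual is not reproduced here, the following DD sketch shows the UNIT and VOL=SER coding for an uncataloged data set; the data set name, device type, and volume serial are placeholders:

//* Uncataloged data set: unit and volume must be supplied in JCL
//OLDDS    DD DSN=USER.UNCAT.DATA,DISP=OLD,
//            UNIT=3390,VOL=SER=VOL001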
59
Volume Table of Contents (VTOC)
The VTOC is a data set that describes the contents of the DASD volume on which it resides. It is a contiguous data set; that is, it resides in a single extent on the volume, and it is pointed to by a record in the first track of the volume. Data is organized in physical blocks preceded by the highest record key in the block; that is, a count-key-data format. The VTOC is made up of control blocks called DSCBs, which describe data set characteristics, free space, and other functions that we will see in the next visuals. A set of macros called the Common VTOC Access Facility (CVAF) allows a program to access VTOC information.
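Besides the CVAF macros, the VTOC contents can be listed with the IEHLIST utility. A minimal sketch; the volume serial and device type are assumptions chosen for illustration:

//LISTVTOC JOB ...
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//* DD1 makes the target volume available to the utility
//DD1      DD UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=VOL001
/*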
60
Data set control block (DSCB)
Data set control block (DSCB) types
DSCB is the name of the logical record within the VTOC. DSCBs describe data sets allocated on that volume, and also describe the VTOC itself. The system automatically constructs a DSCB when space is requested for a data set on a direct access volume. Each data set on a DASD volume has one or more DSCBs describing its characteristics. The DSCB appears in the VTOC and, in addition to space allocation and other control information, contains operating system data, device-dependent information, and data set characteristics. There are seven kinds of DSCBs, each with a different purpose and a different format number. The first record in every VTOC is the VTOC DSCB (format-4). It describes the device, the volume the data set resides on, the volume attributes, and the size and contents of the VTOC data set itself. The next DSCB in the VTOC data set is a free-space DSCB (format-5), even if the free space is described by format-7 DSCBs. The third and subsequent DSCBs in the VTOC can occur in any order.
61
Index VTOC structure
A non-indexed VTOC may provide poor performance, mainly when many data sets reside on the volume; the major reason is the lack of an index to speed up the search. Optionally, a VTOC index can be associated with the VTOC. The VTOC index enhances the performance of VTOC access. It is a physical-sequential data set on the same volume as the related VTOC, and consists of an index of the data set names in the format-1 DSCBs contained in the VTOC, plus volume free-space information.

Note: An SMS-managed volume requires an indexed VTOC; otherwise, the VTOC index is highly recommended. In MVS/DFP 3.3 the VTOC index was improved to make more efficient use of space in an index.

Creating the VTOC and VTOC index
To initialize a volume (preparing it for I/O activity), use the Device Support Facilities (ICKDSF) utility to build the VTOC. You can create a VTOC index at that time by using the ICKDSF INIT command and specifying the INDEX keyword. You may use ICKDSF to convert a non-indexed VTOC to an indexed VTOC by using the BUILDIX command and specifying the IXVTOC keyword; the reverse operation can be performed by using the BUILDIX command and specifying the OSVTOC keyword, as sketched below. See the ICKDSF R16 User's Guide, GC , for details, and refer to DFSMS/MVS DFSMSdfp Advanced Services, SC .
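Converting an existing VTOC to an indexed VTOC might look like the following ICKDSF sketch; the DD name, device type, and volume serial are assumptions:

//BLDIX    JOB ...
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//VOLDD    DD UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSIN    DD *
  BUILDIX DDNAME(VOLDD) IXVTOC
/*

Specifying OSVTOC instead of IXVTOC would convert the volume back to a non-indexed VTOC.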
62
Storage Management policies
63
ACS routines
The visual for this slide shows a DD statement, such as //DS1 DD DSN=CALC.D3,DISP=(NEW,CATLG), being passed to SMS, whose automatic class selection (ACS) routines answer four questions for the new data set: Data Class? Storage Class? Management Class? Storage Group?
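As a hedged illustration of the general shape of an ACS routine (the filter and class names are invented, and real routines are entirely installation-specific), a storage class routine might look like:

PROC STORCLAS
/* Sketch only: CALC.** and 'STANDARD' are example values */
FILTLIST CALCDSN INCLUDE(CALC.**)
SELECT
  WHEN (&DSN = &CALCDSN)
    SET &STORCLAS = 'STANDARD'
  OTHERWISE
    SET &STORCLAS = ''
END
END

A data set assigned a non-null storage class becomes SMS-managed; assigning the null value leaves it outside SMS management.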
64
Traditional DASD
Traditional DASD means the 3380 and 3390 types of devices. More modern IBM DASD products, such as the RAMACs, the RVA, and the Enterprise Storage Server (Shark), as well as OEM DASD, emulate IBM 3380 and 3390 volumes in geometry, track capacity, and number of tracks per cylinder.
65
Redundant Array of Independent Disks (RAID)
Redundant array of independent disks (RAID) is a direct access storage architecture where data is recorded across multiple physical disks, with parity separately recorded, so that no loss of access to data results from the loss of any one disk in the array. The RAID concept replaces one big disk with many small computer system interface (SCSI) disks. The major RAID advantages are:
· Performance (due to parallelism)
· Cost (SCSI commodities)
· S/390 compatibility
· Environment (space and energy)
However, RAID increases the chance of malfunction due to media and disk failures, because the logical device now resides on many physical disks. The solution is redundancy, which wastes space and causes performance problems such as the "write penalty" and "free space reclamation." To address these performance issues, large caches are implemented.

The various implementations certified by the RAID Architecture Board are:
RAID-1 - Disk mirroring, like dual copy.
RAID-3 - An array with one parity device and just one I/O request at a time, with intra-record striping. The access arms move together. It has a high data rate and a low I/O rate.
RAID-5 - An array with distributed parity (a RAMAC-3 array has four HDAs). It executes I/O requests in parallel, with extra-record striping, and the access arms move independently. It has strong caching to avoid the "write penalty" of four I/Os per write. RAID-5 has a high I/O rate and a medium data rate. RAID-5 does the following:
· A read from an undamaged HDA is just one I/O operation.
· A read from a damaged HDA implies (n-1) I/Os, where n is the number of HDAs in the array.
· Every write to an undamaged HDA takes four I/O operations in order to store a correct parity block (the write penalty). This penalty can be relieved with strong caching and a slice-triggered algorithm (coalescing updates into a single parallel I/O).
· Every write to a damaged HDA takes n-1 reads and one parity write.
RAID-6 - An array with two distributed parities and I/O requests executed in parallel, with extra-record striping. Its access arms move independently (Reed/Solomon P-Q parity). The write penalty is greater than in RAID-5, with six I/Os per write.
RAID-6+ - Has no write penalty (due to the log-structured file, LFS) and has background free-space reclamation. The access arms all move together for writes.
RAID-10 - A newer RAID architecture designed to combine the performance of striping with the redundancy of mirroring.
Note: Data striping without redundancy is called RAID-0, but it is not a real RAID because it has no redundancy.
66
Introduction to tape processing
Tapes are volumes that can be physically moved, and only sequential data sets can be stored on tape. Tape volumes can be sent to a safe or to other data processing centers. Internal labels are used to identify magnetic tape volumes and the data sets on those volumes. You can process tape volumes with:
· IBM standard labels
· Labels that follow standards published by:
  - International Organization for Standardization (ISO)
  - American National Standards Institute (ANSI)
  - Federal Information Processing Standard (FIPS)
· Nonstandard labels
· No labels
Your installation can install a bypass for any type of label processing; however, the use of labels is recommended as a basis for efficient control of your data.

IBM standard tape labels consist of volume labels and groups of data set labels. The volume label, identifying the volume and its owner, is the first record on the tape. The data set labels, identifying each data set and describing its contents, precede and follow each data set on the volume:
· The data set labels that precede the data set are called header labels.
· The data set labels that follow the data set are called trailer labels. They are almost identical to the header labels.
· The data set label groups can include standard user labels, at your option.
Usually, the formats of ISO and ANSI labels, which are defined by the respective organizations, are similar to the formats of IBM standard labels. Nonstandard tape labels can have any format and are processed by routines you provide. Unlabeled tapes contain only data sets and tapemarks.
67
Describing the labels
In the job control statements, you must provide a data definition (DD) statement for each data set to be processed. The LABEL parameter of the DD statement is used to describe the data set's labels. You specify the type of labels by coding one of the following subparameters of the LABEL parameter:

SL - IBM standard labels.
AL - ISO/ANSI/FIPS labels.
SUL - Both IBM standard and user header or trailer labels.
AUL - Both ISO/ANSI/FIPS and user header or trailer labels.
NSL - Nonstandard labels.
NL - No labels, but the existence of a previous label is verified.
BLP - Bypass label processing. The data set is treated in the same manner as if NL had been specified, except that the system does not check for an existing volume label. The user is responsible for positioning. If your installation does not allow BLP, the data set is treated exactly as if NL had been specified. Your job can use BLP only if the Job Entry Subsystem (JES, through job class), RACF (through the TAPEVOL class), or DFSMSrmm allows it.
LTM - Bypass a leading tapemark, if encountered, on unlabeled tapes from VSE.

If you do not specify the label type, the operating system assumes that the data set has IBM standard labels. A coding sketch follows.
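For instance, a DD statement requesting IBM standard labels for the first data set on a tape volume might look like this sketch; the data set name and volume serial are placeholders, and UNIT=TAPE assumes an installation-defined esoteric unit name:

//* LABEL=(1,SL): first data set on the volume, IBM standard labels
//TAPEDS   DD DSN=BACKUP.WEEKLY,DISP=(NEW,KEEP),
//            UNIT=TAPE,VOL=SER=T00001,
//            LABEL=(1,SL)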
68
Tape capacity
The capacity of a tape depends on the device type that is recording it. 3480 and 3490 tapes are physically the same cartridges. The IBM 3590 high performance cartridge tape is not compatible with the 3480, 3490, or 3490E drives. 3490E units can read 3480 cartridges, but cannot record as a 3480, and 3480 units cannot read or write as a 3490E. Table 4 lists all IBM tape capacities supported since 1952.