
1 SWG Competitive Project Office
Introduction to IBM’s z/OS: The Operating System for System z

2 Defining characteristics of z/OS
Uses address spaces to ensure isolation of private areas
Ensures data integrity, regardless of how large the user population might be
Can process a large number of concurrent batch jobs, with automatic workload balancing
Allows security to be incorporated into applications, resources, and user profiles
Allows multiple communications subsystems at the same time
Provides extensive recovery, making unplanned system restarts very rare
Can manage mixed workloads
Can manage large I/O configurations of thousands of disk drives, automated tape libraries, large printers, networks of terminals, etc.
Can be controlled from one or more operator terminals, or from application programming interfaces (APIs) that allow automation of routine operator functions
64-bit virtual address space

The use of address spaces in z/OS holds many advantages: isolation of private areas in different address spaces provides for system security, yet each address space also provides a common area that is accessible to every address space.

The system is designed to preserve data integrity, regardless of how large the user population might be. z/OS prevents users from accessing or changing any objects on the system, including user data, except by the system-provided interfaces that enforce authority rules.

The system is designed to manage a large number of concurrent batch jobs, with no need for the customer to externally manage workload balancing or integrity problems that might otherwise occur due to simultaneous and conflicting use of a given set of data.

The security design extends to system functions as well as simple files. Security can be incorporated into applications, resources, and user profiles.

The system allows multiple communications subsystems at the same time, permitting unusual flexibility in running disparate communications-oriented applications (with mixtures of test, production, and fall-back versions of each) at the same time. For example, multiple TCP/IP stacks can be operational at the same time, each with different IP addresses and serving different applications.

The system provides extensive software recovery levels, making unplanned system restarts very rare in a production environment. System interfaces allow application programs to provide their own layers of recovery. These interfaces are seldom used by simple applications; they are normally used by more sophisticated applications.

The system is designed to routinely manage very disparate workloads, with automatic balancing of resources to meet production requirements established by the system administrator. It is also designed to routinely manage large I/O configurations that might extend to thousands of disk drives, multiple automated tape libraries, many large printers, large networks of terminals, and so forth.

The system is controlled from one or more operator terminals, or from application programming interfaces (APIs) that allow automation of routine operator functions. The operator interface is a critical function of z/OS. It provides status information, messages for exception situations, control of job flow, and hardware device control, and allows the operator to manage unusual recovery situations.

3 What’s an Address Space?
An execution environment in z/OS. Remember that z/OS runs in an LPAR (much like a distributed server, a box on the floor).
How many address spaces can there be? Thousands.
User address spaces are unique and run single applications.
Multiple units of work can be active within an address space (parallel execution); these units of work are called tasks.
User address spaces do not communicate with each other; if one address space fails, the other user address spaces continue to run.
System address spaces execute system components (elements), e.g. DB2, CICS, SMF, RMF, DFSMS (more coming).
These components are called subsystems (a system within a system), and system components communicate with each other.
Cloned or duplicate address spaces running as a subsystem communicate with each other, and multiple address spaces of one subsystem act as one.
If one address space fails, the component (e.g. a running DB2) continues to execute. This enables continuous platform availability.
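This tasking model is loosely analogous to threads sharing a process on other platforms (z/OS TCBs are not POSIX threads, and this is not z/OS code). The C sketch below is only meant to illustrate several units of work running in parallel inside one address space while other address spaces stay isolated.

#include <pthread.h>
#include <stdio.h>

/* Each "task" here is a unit of work sharing one address space,
 * roughly the way TCBs share a z/OS address space.              */
static void *unit_of_work(void *arg)
{
    printf("task %ld executing in the shared address space\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t tasks[3];
    for (long i = 0; i < 3; i++)          /* dispatch three parallel tasks */
        pthread_create(&tasks[i], NULL, unit_of_work, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(tasks[i], NULL);
    return 0;                             /* a failure here would not affect
                                             other processes (address spaces) */
}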

4 What’s in an address space?
z/OS provides each user with a unique address space and maintains the distinction between the programs and data belonging to each address space. Because it maps all of the available addresses, however, an address space includes system code and data as well as user code and data. Thus, not all of the mapped addresses are available for user code and data.
The "size" of an address space depends on the addressing range of the hardware architecture of the server. In this example: a 16 MB address space and a 2 GB address space.
z/OS can run in different addressing modes:
24-bit mode (16 MB)
31-bit mode (2 GB)
64-bit mode (16 exabytes)
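To see where those sizes come from, an n-bit address reaches 2^n bytes: 2^24 is 16 MB, 2^31 is 2 GB, and 2^64 is 16 exabytes. The tiny C program below (illustrative only) just prints that arithmetic.

#include <stdio.h>

int main(void)
{
    unsigned bits[] = { 24, 31, 64 };        /* z/OS addressing modes */
    for (int i = 0; i < 3; i++) {
        if (bits[i] < 64)
            printf("%2u-bit mode: %llu bytes\n",
                   bits[i], 1ULL << bits[i]);
        else  /* 1 << 64 would overflow a 64-bit integer, so state it symbolically */
            printf("%2u-bit mode: 2^64 bytes (16 exabytes)\n", bits[i]);
    }
    return 0;
}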

5 Examples of z/OS address spaces
z/OS and its related subsystems require address spaces of their own to provide a functioning operating system:
System address spaces are started after initialization of the master scheduler. These address spaces perform functions for all the other types of address spaces that start in z/OS.
Subsystem address spaces for major system functions and middleware products such as DB2, CICS, and IMS.
TSO/E address spaces, created for every user who logs on to z/OS.
Address spaces for every batch job that runs on z/OS.

6 The z/OS system structure
[Slide diagram: the LIC/LPAR layer, the base operating system above it, and a row of address spaces: system tasks (TCP/IP, VTAM, JES), subsystems (DB2, WebSphere, Lotus Notes), batch jobs, TSO users, and IMS/CICS regions (MPP, BMP, TOR, AOR, DOR, IMS CR).]

Let me talk a moment about the basic structure of the operating system and how it looks internally. At the lowest level of the internal system architecture is the zSeries hardware, the processor complex. We’ve already talked about that, and we’ve already discussed the next layer, PR/SM and LPARs, which is implemented in the hardware via Licensed Internal Code (LIC). On top of the LPAR runs the base z/OS operating system.

Each unit of work on z/OS runs in something called an address space. Each system task is an address space; TCP/IP, JES, DB2, etc. all run in address spaces. Some address spaces are referred to as started tasks (system tasks that are started by an operator), some address spaces are batch jobs, some are TSO users, and others are IMS or CICS regions (we’ll talk more about what IMS and CICS are in the next Web Lecture). But at the most granular level, each unit of work is an address space. The Job Entry Subsystem manages the flow of jobs, users, etc. in and out of z/OS, which takes care of creating these address spaces. Within the address space there are tasks that are dispatched, and those tasks execute instructions on the CPU engines. The operating system takes care of that dispatching job, with the help of the Workload Manager.

Each address space has the ability to address a certain amount of storage, defined by the width of the address in the architecture. In the zArchitecture, the address width is 64 bits, which comes to about 16 exabytes, although zSeries hardware does not yet support this much real memory. That difference between real memory size and addressability is handled by the virtual storage features of the operating system.

Address space addressability: 24-bit in MVS/370, 31-bit in MVS/XA through OS/390, and 64-bit in z/OS.

7 PR/SM (Hypervisor Firmware)
[Slide diagram: "Do you speak zSeries?" PR/SM hypervisor firmware hosting logical partitions (z/OS production, test, and development; Linux; z/VM running Linux and z/OS virtual machines), with the channel subsystem, channels, control units, and DASD below, and processor units of several types: CP (central processor), IFL (Integrated Facility for Linux), ICF (Internal Coupling Facility), zAAP, and SAP (service assist processor).]

8 What’s Virtual Memory?
Virtual memory is the hardware addressing capability of the architecture. Most likely the main storage (central storage) will be less than the virtual storage, e.g. 512 GB of main storage vs. 16 exabytes of virtual storage (2^39 vs. 2^64).

Where’s it all go?
Page = 4K virtual address range
Frame = 4K real address range
Slot = 4K disk storage space

A page can exist in a frame or in a slot; it must be in a frame for its data and instructions to be accessed. The location of the page is kept in tables created and maintained by the operating system. Each job has its own distinct tables, and the pointer to the tables is part of the state data. The operating system has storage managers to manage the pages, frames and slots (VSM, RSM and ASM: the Virtual, Real and Auxiliary Storage Managers). The z10 has 1 MB segments.
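The page/frame/slot bookkeeping can be pictured with a small data structure. This is a toy model for illustration only, not a real z/OS control block: each 4K virtual page is described by an entry recording whether it currently occupies a real-storage frame or an auxiliary-storage slot.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* pages, frames, and slots are all 4K */

/* One entry per 4K virtual page of an address space (toy model). */
struct page_entry {
    bool     in_frame;   /* true: page is backed by real storage        */
    uint64_t frame;      /* frame number in real storage, if in_frame   */
    uint64_t slot;       /* slot number on auxiliary storage, otherwise */
};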

9 Address Space Concepts & Terminology
Virtual Storage Concepts
ADDRESS SPACE (in the z/OS context): the range of virtual addresses to be used by a single job, user, or system component. Because of the table structure there is an aspect of isolation, and thus integrity, between user address spaces.
PAGING: moving pages between frames and slots (page-in and page-out).
PAGE STEALING: marking pages to be paged out to free up frames.
WORKING SET: the number of pages in memory in use for an address space.
PAGE TRIMMING: pages are stolen from different address spaces, thus effectively trimming the working set size (WSS) of those address spaces.
SWAPPING: moving an address space’s working set between frames and slots (swap-in and swap-out). Used to free up resources when tasks go into long waits (e.g. for I/O or resource acquisition).
DEMAND PAGING: bringing a page into a frame from a slot when an instruction is executed that accesses an address in the page.
PAGE FAULT: accessing a virtual address that is not in memory. Case 1: the page is in a slot on the page dataset. Case 2: the address is invalid; this results in an error.

10 The address space concept
With the release of zSeries mainframes in 2000, IBM extended the addressability of the architecture to 64 bits. With 64-bit addressing, the potential size of a z/OS address space expands to a size so vast that we need new terms to describe it. Each address space, called a 64-bit address space, is 16 exabytes (EB) in size; an exabyte is slightly more than one billion gigabytes. The new address space has logically 2^64 addresses. It is 8 billion times the size of the former 2 GB address space. The number is 16 with 18 zeros after it: 16,000,000,000,000,000,000 bytes, or 16 EB (see the slide). We say that the potential size is 16 exabytes because z/OS, by default, continues to create address spaces with a size of 2 GB. The address space exceeds this limit only if a program running in it allocates virtual storage above the 2 GB address. If so, the z/OS operating system increases the storage available to the user from 2 GB to 16 EB.

11 Mapping of z/OS addressability
The 16 MB address became the dividing point between the two previous architectures (the 24-bit addressability introduced with MVS/370 and the 31-bit addressing introduced in the operating system MVS Extended Architecture, or MVS/XA), and is commonly called the line. The area that separates the virtual storage area below the 2 GB address from the user private area is called the bar. As described on the previous slide, the 64-bit address space introduced with the zSeries mainframes in 2000 has logically 2^64 addresses: 16,000,000,000,000,000,000 bytes, or 16 EB.

12 How virtual storage works
Virtual storage is divided into 4-kilobyte pages. Transfer of pages between auxiliary storage and real storage is called paging. When a requested address is not in real storage, an interruption is signaled and the system brings the required page into real storage. z/OS uses tables to keep track of pages (dynamic address translation, or DAT). Frames, pages, and slots are all repositories for a page of information.

[Slide diagram: a small COBOL program (Identification, Data, and Procedure Divisions) shown partly in main memory and partly paged out.]

For the processor to execute a program instruction, both the instruction and the data it references must be in real storage. The convention of early operating systems was to have the entire program reside in real storage when its instructions were executing. However, the entire program does not really need to be in real storage when an instruction executes. Instead, by bringing pieces of the program into real storage only when the processor is ready to execute them, and moving them out to auxiliary storage when it doesn’t need them, an operating system can execute more and larger programs concurrently.

The operating system can divide a program into pieces and assign each piece a unique address. This arrangement allows the operating system to keep track of these pieces. In z/OS, the program pieces are called pages. z/OS uses tables to determine whether a page is in real or auxiliary storage, and where. To find a page of a program, z/OS checks the table for the virtual address of the page, rather than searching through all of physical storage for it. z/OS then transfers the page into real storage or out to auxiliary storage as needed. This movement of pages between auxiliary storage slots and real storage frames is called paging, and paging is key to understanding the use of virtual storage in z/OS.

Dynamic address translation, or DAT, is the process of translating a virtual address during a storage reference into the corresponding real address. Physical storage is divided into areas, each the same size and accessible by a unique address. In real storage, these areas are called frames; in auxiliary storage, they are called slots.
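To make the mechanics concrete, here is a deliberately simplified, single-level sketch of what DAT plus demand paging accomplish. Real z/OS uses multi-level region/segment/page tables maintained by the hardware and the Real Storage Manager; the table layout and the page_in helper below are hypothetical.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

struct pte {
    bool     valid;   /* page currently backed by a real-storage frame? */
    uint64_t frame;   /* frame number if valid                          */
    uint64_t slot;    /* auxiliary-storage slot if paged out            */
};

/* Hypothetical helper: copy a slot into a free frame, return the frame. */
extern uint64_t page_in(uint64_t slot);

/* Translate a virtual address to a real address, demand-paging on a fault. */
uint64_t translate(struct pte table[], uint64_t vaddr)
{
    struct pte *p = &table[vaddr / PAGE_SIZE];   /* virtual page number */
    if (!p->valid) {                             /* page fault          */
        p->frame = page_in(p->slot);             /* bring the page into a frame */
        p->valid = true;
    }
    return p->frame * PAGE_SIZE + vaddr % PAGE_SIZE;
}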

13 Elements
z/OS consists of a collection of functions called base elements and optional elements.
Some of the base elements can be dynamically enabled and disabled; a customer may choose to use a vendor product instead of the IBM product.
Optional elements are called features. Customers can select the features they want shipped with the operating system.
The optional elements (features) are either integrated or nonintegrated. Features, both integrated and nonintegrated, are also tested as part of the integration of the entire system.

14 Elements of z/OS - Base and Optional
Some base elements: Base Control Program (BCP), Bulk Data Transfer base (BDT), BookManager Read, Communications Server, Cryptographic Services, DFSMSdfp, Distributed File Service, EREP, ESCON Director Support, FFST, HCD, High Level Assembler (HLASM), IBM HTTP Server, IBM Tivoli Directory Server for z/OS, ICKDSF, Integrated Security Services, ISPF, JES2, Language Environment, Library Server, MICR/OCR, Network File System (NFS), OSA/SF, Run-Time Library Extensions, SMP/E, TIOC, TSO/E, z/OS UNIX, 3270 PC File Transfer Program

15 Optional Elements
BDT File-to-File, BDT SNA NJE, BookManager BUILD, C/C++ without Debug Tool, Communications Server Security Level 3, DFSMSdss, DFSMShsm, DFSMSrmm, DFSMStvs, DFSORT, GDDM-PGF, GDDM-REXX, HCM, HLASM Toolkit, Infoprint Server, JES3, RMF, SDSF, Security Server, z/OS Security Level 3

16 The BCP – Base Control Program
Essential operating system services: the base control program and a job entry subsystem (JES).
The BCP requires the following: a security product (RACF is the IBM offering), DFSMSdfp, Communications Server, SMP/E, TSO/E, and the z/OS UNIX System Services (z/OS UNIX) kernel.
Important BCP components: System Management Facilities (SMF), Resource Measurement Facility (RMF), Workload Manager (WLM).
Interesting optional features: Infoprint Server, the I/O configuration program (IOCP), the program management binder, support for the Unicode Standard, and z/OS XML System Services.

17 ..Just a little more on this
Optional features are unpriced or priced:
Unpriced features are shipped to you only if you order them.
Priced features are always shipped. These features are ready to use after you install z/OS. IBM enables the priced features you ordered and disables the priced features you did not order. Disabled features can later be enabled (dynamic enablement).
Dynamic enablement is done by updating SYS1.PARMLIB member IFAPRDxx; notify IBM by contacting your IBM representative.
Some optional features that support dynamic enablement are always shipped, e.g. JES3, DFSMSdss, and DFSMShsm. If these features are ordered as part of the z/OS system order, they are shipped enabled in the system. If they are not ordered, they are shipped disabled; later on you can enable them through the SYS1.PARMLIB member.
The other type of feature is the optional feature equivalent to an optional program product, e.g. RACF from the Security Server set of programs, RMF, or the C/C++ compiler.

18 Running in the Address Spaces
User applications: batch jobs, middleware (DB2, CICS), ISV applications, application servers (WebSphere), web servers, TSO users, UNIX users
Unix System Services (USS)
System applications: the TCP/IP stack, RACF (Resource Access Control Facility, the z/OS security manager), and others

19 Who Makes an Address Space
When z/OS is “booted” (really IPLed: Initial Program Load), a component called the Master Scheduler is built as the first address space. The Master Scheduler then creates other address spaces as needed:
When a TSO user logs on
When a USS user logs on
When a system task is started
When JES is started
When JES initiators are started (they pull jobs off the JES queues)
More examples follow: SMF (System Management Facilities), RMF (Resource Measurement Facility), DFSMS (Data Facility Storage Management Subsystem)

20 What Type of System Applications?
GRS – Global Resource Serialization: controls access to resources
RACF – Resource Access Control Facility: provides security services
WLM – Workload Manager: dynamically sends work to resources and resources to work
JES – Job Entry Subsystem: queues up work for entry into z/OS and queues up output for sending to printers
SMF – System Management Facilities: gathers records from system applications and writes them to disk (performance data, events, …)
RMF – Resource Measurement Facility: provides reports on system and application activity, plus graphical real-time operating system data
These components and subsystems communicate with each other across address spaces.

21 SMF – Part of BCP
SMF – System Management Facilities (system record recording)
Components write records to SMF, and SMF writes the records to a dataset. Every record has a specific record id associated with it, and record formats differ. The data is post-processed. System programmers configure which records are and are not written.
There are two SMF datasets: one is hot, and when that dataset is full, automation or the operator switches to the standby dataset and then dumps the data from the full dataset.
The data is used for various purposes: performance analysis, workload management behavior, resource consumption (I/O, memory, CPU), and error analysis.
z/OS architects and developers use a system service to write SMF records. Developers determine if, where, and when in the code a record is written. An SMF developer/designer assigns the record id and reviews the record format, structure, and content.
Data and record collection is event driven, e.g. the start and stop of a job (with reason codes indicating why the job was stopped), or the open and close of a dataset (with a count of I/O records read and written).
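The idea of event-driven, typed records can be sketched in ordinary C. This is illustrative only: real components cut SMF records through z/OS system services, not through code like this, and the record layout below is invented.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Invented layout: a typed, timestamped record cut when an event occurs. */
struct smf_like_record {
    uint16_t type;        /* record id assigned to this event class     */
    time_t   when;        /* time the event occurred                    */
    char     jobname[9];  /* originating unit of work                   */
    uint32_t reason;      /* e.g. why a job ended, count of I/Os, ...   */
};

/* Write one record sequentially; the dataset is post-processed later. */
void cut_record(FILE *log, uint16_t type, const char *job, uint32_t reason)
{
    struct smf_like_record r = { .type = type, .when = time(NULL),
                                 .reason = reason };
    strncpy(r.jobname, job, sizeof r.jobname - 1);
    fwrite(&r, sizeof r, 1, log);
}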

22 RMF – Resource Measurement Facility
RMF (Resource Measurement Facility) is an optional priced feature of z/OS. It is a product that supports performance analysis, capacity planning, and problem determination. For these disciplines, different kinds of data collectors are available:
Monitor I is the long-term data collector for all types of resources and workloads. The SMF data collected by Monitor I is mostly used for capacity planning but also for performance analysis.
Monitor II is the snapshot data collector for address space states and resource usage. Some of the gathered data is also displayed in SDSF.
Monitor III is the short-term data collector for problem determination, workflow delay monitoring, and goal attainment supervision. The Monitor III data is also used by the RMF PM Java Client, the RMF web browser interface, and Tivoli TBSM.
The data collected by all three gatherers can be saved persistently for later reporting. Monitor II and Monitor III are online reporters. Monitor I and Monitor III can store the collected data to datasets.

23 RMF Architecture Overview
[Slide diagram: RMF architecture overview. The Monitor I background data gatherer and Monitor II write to SMF datasets, while Monitor III writes to VSAM datasets. The RMF Sysplex Data Server and its APIs (including open standards such as WBEM/CIM) feed the RMF Postprocessor for historical reporting, long-term analysis, and planning, and feed Monitor II and Monitor III online monitoring for real-time and snapshot reporting, problem determination, and data reduction. The gatherers and reporters are described on the previous slide.]

24 Postprocessor: Standard Reporting
[Slide sample: RMF Postprocessor output. A CPU Activity report (system scope) showing per-processor online time, LPAR busy and MVS busy percentages, and I/O interrupt rates, and a Workload Activity report (sysplex scope, GOAL mode) for service class SYSTEM showing transaction counts, response times, DASD I/O, service rates, page-in rates, and storage figures. The numeric values are not reproduced here.]

Sample Postprocessor JCL:
//RMFPP EXEC PGM=ERBRMFPP
//SYSIN DD *
  DATE( , )
  RTOD(1100,1300)
  DINTV(0030)
  REPORTS(CPU)
  SYSRPTS(WLMGL(SCPER))
  SYSOUT(H)

Just a sample of a Postprocessor report, with system scope and sysplex scope.

25 Middleware
z/OS runs middleware applications and packages. Middleware is usually a product, i.e. it costs the customer money, and it may not be an IBM product.
Common middleware: DB2, CICS, IMS, and WebSphere products.
z/OS provides interfaces for vendors to use, called the Subsystem Interface (SSI), which is also used by z/OS components.
CICS – Customer Information Control System
DB2 – IBM’s relational database
IMS – Information Management System transaction monitor
WebSphere Application Server – IBM’s J2EE application server

26 z/OS Software Stack
[Slide diagram: the z/OS software stack with representative versions.]
Application: SAP, Siebel, JD Edwards, and customer applications
Middleware (CICS, IMS, WebSphere): CICS 3.2, WebSphere 6.1, IMS 10
Database (DB2, IMS): DB2 9, IMS 10
Security: RACF 1.8
Systems Management
Operating system: z/OS 1.8
Processor: z9 (z9 109)

27 Transactions and Data – the zSeries Application “Sweet Spot”
Transaction monitor: manages a transaction
A program or subsystem that manages or oversees the sequence of events that are part of a transaction
Makes sure the ACID properties of a transaction are maintained
Includes functions such as interfacing to databases and networks, and transaction commit/rollback coordination
Provides an API so applications can exploit the services of the transaction monitor
IBM’s z/OS-based transaction monitors: IMS (Information Management System), CICS (Customer Information Control System), and WebSphere Application Server for z/OS

A key strength of the z/OS platform is support for high-volume, high-performance transaction management using transaction monitors. A transaction monitor is a program or a subsystem that manages and/or oversees the sequence of events that are part of the transaction. The monitor ensures that the ACID properties that we discussed on the previous slide are maintained. Usually, a transaction monitor takes care of all of the data, networking and user interface interactions in the transaction; since these items affect whether the ACID properties are maintained, the TM manages them. Also, the TM usually provides an application programming interface that can be used by applications to invoke the TM functions. This is usually a set of program calls that can be used in programming languages such as COBOL, PL/I and Java. Applications written in a TM environment tend to be short-lived, high-volume applications, like ATM withdrawals, unlike other kinds of applications such as data mining or batch processing that involve lots of data and long durations.

On z/OS, there are three transaction monitors sold by IBM: Information Management System (IMS), the Customer Information Control System (CICS), and WebSphere for z/OS. All three perform the same basic transactional functions, but are implemented in different ways, use different APIs, and have different internal architectures. All three leverage the strengths of the z/OS and zSeries platform to perform transaction processing in a very scalable, reliable, and high-performance fashion. There are also other TP monitors on z/OS provided by third parties: IDMS is a product from Computer Associates that provides both database management and TP monitor facilities, and BEA provides their WebLogic application server on the z/OS platform, just as we do with WebSphere. So, let me reiterate: transaction processing is a key strength of the z/OS platform. Our large mainframe customers depend on z/OS to deliver their business-critical transactions using these subsystems, and the strengths of the platform allow this to be done on a high-volume, high-performing platform.
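The commit/rollback coordination described above can be sketched abstractly. This is not the IMS, CICS, or WebSphere API; it is a generic illustration, with hypothetical prepare/commit/rollback callbacks, of how a monitor keeps a transaction atomic: either every resource manager commits or every one backs out.

#include <stdbool.h>

/* Hypothetical callbacks a resource manager (database, queue, ...)
 * registers with the transaction monitor.                          */
struct resource {
    bool (*prepare)(void);    /* vote yes/no on committing */
    void (*commit)(void);     /* make the updates permanent */
    void (*rollback)(void);   /* undo the updates           */
};

/* Returns true if the whole transaction committed, false if backed out. */
bool run_transaction(struct resource res[], int n)
{
    for (int i = 0; i < n; i++)          /* phase 1: collect votes        */
        if (!res[i].prepare()) {
            for (int j = 0; j <= i; j++) /* any no vote: undo everything  */
                res[j].rollback();
            return false;
        }
    for (int i = 0; i < n; i++)          /* phase 2: everyone commits     */
        res[i].commit();
    return true;
}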

28 IMS – Information Management System
“IMS Runs the World” since 1968:
Most corporate data is managed by IMS; over 95% of Fortune 1000 companies use IMS
IMS manages over 15 billion GB of production data
$2 trillion/day transferred through IMS by one customer
Over 50 billion transactions a day run through IMS; IMS serves close to 200 million users per day
Over 79 million IMS transactions/day handled by one customer on a single production sysplex, 30 million transactions/day on a single CEC
120 million IMS transactions/day (7 million per hour) handled by one customer
4,000 transactions/sec (250 million/day) across TCP/IP to a single IMS
Over 3,000 days without an outage at one large customer
21,000 transactions per second on a single z990, with 4 IMS servers

First of all, let’s look at IMS and some statistics about it. IMS has been around since 1968, and is run in over 95% of Fortune 1000 corporations. The database manager in IMS manages over 15 billion gigabytes of data, and there are more than 50 billion IMS transactions processed each day world-wide. You can see from some of these other numbers, such as the stat showing 4,000 transactions per second on a single IMS system at one account, that IMS is capable of handling the largest transaction processing requirements, and with incredible reliability and performance. And these numbers are getting bigger and better. Just recently, with IMS Version 9, IBM benchmarked performance of over 21,000 IMS transactions per second on a single z990 processor, using 4 IMS server instances. That means that a single IBM eServer could process almost 2 billion transactions in a day.

29 CICS – Customer Information Control System
30+ years of applications; more than 30 billion transactions per day
5,000 packages from 2,000 ISVs; 30 million CICS users
50,000 CICS/390 licenses across 16,000 customers; 950,000 CICS application programmers (“it’s the programming model!”)
490 of IBM’s top 500 customers
What is it? CICS provides an execution environment for concurrent program execution for multiple end users, who have access to multiple data types. CICS manages the operating environment to provide performance, scalability, security, and integrity.

The Customer Information Control System, or CICS, like IMS, has a very large user community and transaction load across the IBM customer community. It too has been around for several decades, although CICS is a wee bit newer than IMS. There are more than 30 billion transactions processed each day on CICS, and there are thousands of third-party ISV applications available for CICS. Almost every single one of IBM’s largest 500 customers uses CICS. And, an incredible statistic: there are over 950,000 CICS application developers. The success of CICS is due in large part to the ease of use of the CICS API and the huge number of developers using it. As you would expect, CICS, as a transaction processing monitor, does many of the same things that IMS does: it provides a multi-user execution environment for transactions that access multiple types of data. And, like IMS, CICS manages the operating environment to provide a high-performance, scalable and secure run-time for the transactions to run with the ACID properties we discussed before.

30 WebSphere Application Server for z/OS, the Java Transaction Manager
Architected on SOA infrastructure and principles; fully J2EE 1.4 platform certified; leading web services support; WebSphere Rapid Development & Deployment
zAAP enabled (z9-109, z990, z890): run Java applications next to mission-critical data and lower the cost of computing for WebSphere Application Server (and all z/OS-based Java applications)
Common code infrastructure; administration skills shared between platforms; develop anywhere, run on WebSphere Application Server for z/OS
Native OS support leverages the z/OS platform; optimization features designed to provide security and data interaction, including CICS, IMS, and DB2

[Slide diagram: a browser or web service requestor client driving the WAS z/OS web container (JSPs, servlets) and EJB container (EJBs), with Java work eligible for the zAAP and data access to DB2.]

SOA value proposition:
Evolve development projects based on existing investments; new services can be created to maximize return on investment
Continuous business process improvements; adoption of component-based services simplifies business processes and flows of information
Standardization and simplification across organizations; standardized interfaces and processes promote greater efficiencies and reduce the risk of failure of complex projects
Alignment of IT with business needs; consolidation of IT components leads to closer alignment of IT and line-of-business objectives
Reduction of costs and faster time-to-market

Customer status: while no customers are in production yet with V6.0.1 (product GA March 25th), the beta produced more referenceable customers than any past betas. Currently, 4 beta customers are willing to be references from a total of 6. Those include Aurora (plan to move to V6 later this year), Daimler-Chrysler (already talking to other customers), and VPS.

31 A Mainframe Runs Mixed Workloads
Typical large customer daily activity

32 How virtual storage works (continued…)
An address is an identifier of a required piece of information, but not a description of where in real storage that piece of information is. This allows the size of an address space (that is, all addresses available to a program) to exceed the amount of real storage available. All real storage references are made in terms of virtual storage addresses, and a hardware mechanism is used to map each virtual storage address to a physical location in real storage. As shown on the slide, the same virtual address can exist in more than one address space, because each maps to a different address in real storage. When a requested address is not in real storage, a hardware interruption is signaled to z/OS and the operating system brings the required instructions and data into real storage.

33 Dynamic Address Translation
[Slide diagram: region tables used in translating 64-bit addresses.]

34 Pages, Frames, and Slots
The pieces of a program executing in virtual storage must be moved between real and auxiliary storage:
A block of real storage is a frame.
A block of virtual storage is a page.
A block of auxiliary storage is a slot.
A page, a frame, and a slot are all the same size: 4096 bytes (4 kilobytes). To the programmer, the entire program appears to occupy contiguous space in real storage at all times. The slide shows z/OS performing paging for a program running in virtual storage. The lettered boxes represent parts of the program. In this simplified view, program parts A, E, F, and H are active and running in real storage frames, while parts B, C, D, and G are inactive and have been moved to auxiliary storage slots. All of the program parts, however, reside in virtual storage and have virtual storage addresses.

35 Page Stealing
z/OS tries to keep an adequate supply of available real storage frames on hand. When this supply becomes low, z/OS uses page stealing to replenish it. Pages that have not been accessed for a relatively long time are good candidates for page stealing. z/OS also uses various storage managers to keep track of all pages, frames, and slots in the system.
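A much-simplified sketch of the selection step: when free frames run low, pick the in-use frame whose page has gone unreferenced the longest and page it out to a slot. Real z/OS uses its own aging algorithms; the structure and function below are illustrative only.

#include <stdint.h>

struct frame {
    int      in_use;     /* frame currently holds a page        */
    uint64_t last_ref;   /* "clock" value of the last reference */
};

/* Return the index of the best steal candidate, or -1 if none is in use. */
int pick_steal_candidate(const struct frame frames[], int nframes)
{
    int victim = -1;
    for (int i = 0; i < nframes; i++)
        if (frames[i].in_use &&
            (victim < 0 || frames[i].last_ref < frames[victim].last_ref))
            victim = i;              /* least recently referenced so far */
    return victim;
}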

36 Swapping
Swapping is one of several methods that z/OS uses to balance the system workload and ensure that an adequate supply of available real storage frames is maintained. Swapping has the effect of moving an entire address space into, or out of, real storage:
A swapped-in address space is active, having pages in real storage frames and pages in auxiliary storage slots.
A swapped-out address space is inactive; the address space resides on auxiliary storage and cannot execute until it is swapped in.
The “working set” is the number of active pages in an application. The inactive pages have been paged out, leaving the active pages in memory. The number of applications that can fit comfortably in a system concurrently is a function of the working set of each. When an entire application is moved to auxiliary storage, it is “swapped out”: all of its pages (including the working set) are moved to a separate swap dataset. When resources are freed up, the application is brought back into memory.

37 Data Space Management

38 Some More z/OS Address Spaces
[Slide diagram: system and subsystem address spaces started at IPL as started tasks (MASTER, PCAUTH, RASP, TRACE, CATALOG, CONSOLE, VLF, LLA, JES, VTAM, …), alongside TSO LOGON user address spaces and batch job (initiator/job) address spaces.]

39 DFSMS – The Premier Storage Management Suite
Goals:
Improve the use of the storage media; for example, by reducing out-of-space abends and providing a way to set a free-space requirement.
Reduce the labor involved in storage management by centralizing control, automating tasks, and providing interactive or batch controls for storage administrators.
Reduce the user’s need to be concerned with the physical details of performance, space, and device management. Users can focus on using information instead of managing data.

40 DFSMSdfp - System Managed Storage (SMS)
[Slide diagram: an allocation request flowing through System Managed Storage, where the Data Class, Storage Class, and Management Class constructs are assigned and the dataset is placed in a Storage Group (GRP_1, GRP_2, GRP_3, …).]

SMS automates the management of external storage: space management, backup/recovery management, performance management, and availability management. It automates the choice of allocations (volumes, dataset characteristics), retention management, migration, backup, and security, based on available space, performance needs, and business requirements. ISMF (Interactive Storage Management Facility) is the storage administrator’s working tool.
Data Class: allocation parameters
Storage Class: SMS-managed or non-SMS-managed
Management Class: space management (migration, backup) and expiration, handled by DFSMShsm
Storage Group: physical assignment of datasets to the volumes in their corresponding group

41 Mitigating Management Costs…
DFSMSdfp constructs are key to data placement and to assigning goals, requirements, etc. The operating system and subsystems understand the three user-specifiable constructs.
DFSMShsm and ABARS are key to implementing the management policy: coherent backups and data retirement.
DFSMSdss is key to moving and copying datasets.
DFSMSrmm is key to tape management.

42 Control Blocks
[Slide diagram: the control block representation of an address space, chaining from the PSA and CVT through the ASVT to the ASCB/ASXB, the TCBs and RBs, and on to storage- and task-related blocks such as the TIOT, DEB, LLE, CDE, SPQE/DQE/FQE, PQE/FBQE, and SPIE/PICA.]
Control blocks come in four types:
Resource-related control blocks
System-related control blocks
Task-related control blocks
Job-related control blocks

43 Workload Manager Constructs

44 What WLM Does
WLM does the following:
Monitors the use of resources by the various address spaces
Monitors the system-wide use of resources to determine whether they are fully utilized
Determines which address space to swap out (and when)
Inhibits the creation of new address spaces or steals pages when certain shortages of real storage exist
Changes the dispatching priority of address spaces to adjust the consumption of system resources
Selects the devices to be allocated, if a choice of devices exists, to balance the use of I/O devices
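The goal-oriented flavor of this can be sketched with a "performance index", the ratio of achieved service to the goal. This is not WLM's actual algorithm; the structure, thresholds, and field names below are invented for illustration.

/* Toy goal-oriented adjustment: compare achieved response time with the
 * goal and nudge dispatching priority accordingly (illustrative only). */
struct service_class {
    double goal_ms;      /* response-time goal             */
    double actual_ms;    /* measured average response time */
    int    priority;     /* dispatching priority           */
};

void adjust(struct service_class *sc)
{
    double pi = sc->actual_ms / sc->goal_ms;  /* performance index         */
    if (pi > 1.0)
        sc->priority++;                       /* missing the goal: help it */
    else if (pi < 0.8)
        sc->priority--;                       /* well ahead: donate cycles */
}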

45 WLM Classification Rules

46 Transaction Flow
[Slide diagram: a transaction flowing through the system as an enclave, with its goals being met at each stage ("GOAL Met!").]

47 Mapping Unix to z/OS Terms and Concepts

48 z/OS Support Summary
z/OS Support Summary z9-109 z990 z890 z900 z800 G5/G5 Multiprise® 3000 End of Service Coexists with Ship Date 1.4 x 3/07 1.7 9/02 1.5 3/07* 1.8 3/04 1.6 9/07* 9/04 9/08* 1.9 9/05 1.8* 9/09* 1.10 9/06* A summary of z/OS support facts and policies. Key points: Current release and end of service date Releases they can migrate to in a single step migration (Coexistence/Migration Policy) Servers supported for that release. Note: Support for z/OS 1.4 is planned to end in September Customers should be planning to migrate to either z/OS 1.6 or 1.7 before this date. z/OS 1.7 is the last release that z/OS 1.4 customers can migrate to in a single step. *Planned z/OS.e – Available for z890 and z800 only zCPO zClass Introduction to z/OS

49 Summary of z/OS facilities
Address spaces and virtual storage for users and programs.
Physical storage types available: real and auxiliary.
Movement of programs and data between real storage and auxiliary storage through paging.
Dispatching work for execution, based on priority and ability to execute.
An extensive set of facilities for managing files stored on disk or tape.
Operators use consoles to start and stop z/OS, enter commands, and manage the operating system.

An address space describes the virtual storage addressing range available to an online user or a running program. Two types of physical storage are available: real storage and auxiliary storage (AUX). Real storage is also referred to as real memory or central storage. z/OS moves programs and data between real storage and auxiliary storage through a process called paging. z/OS dispatches work for execution; that is, it selects programs to be run based on priority and ability to execute, and then loads the program and data into real storage. All program instructions and data must be in real storage when executing. An extensive set of facilities manages files stored on direct access storage devices (DASD) or tape cartridges. Operators use consoles to start and stop z/OS, enter commands, and manage the operating system. z/OS provides operational facilities such as security, recovery, data integrity and workload management.

50 Summary
z/OS, the most widely used mainframe operating system, is ideally suited for processing large workloads for many concurrent users.
Virtual storage is an illusion created by the architecture, in that the system seems to have more storage than it really has.
Each user of z/OS gets an address space containing the same range of storage addresses.
z/OS is structured around address spaces, which are ranges of addresses in virtual storage.
Production systems usually include add-on products for middleware and other functions.

An operating system is a collection of programs that manage the internal workings of a computer system. The operating system taught in this course is z/OS, the most widely used mainframe operating system. The z/OS operating system’s use of multiprogramming and multiprocessing, and its ability to access and manage enormous amounts of storage and I/O operations, makes it ideally suited for running mainframe workloads.

The concept of virtual storage is central to z/OS. Virtual storage is an illusion created by the architecture, in that the system seems to have more storage than it really has. Virtual storage is created through the use of tables to map virtual storage pages to pages in real storage or slots in auxiliary storage. Only those portions of a program that are needed are actually loaded into real storage. z/OS keeps the inactive pieces of address spaces in auxiliary storage.

z/OS is structured around address spaces, which are ranges of addresses in virtual storage. Each user of z/OS gets an address space containing the same range of storage addresses. The use of address spaces in z/OS allows for isolation of private areas in different address spaces for system security, yet also allows for inter-address space sharing of programs and data through a common area accessible to every address space. Programs running on z/OS and zSeries mainframes can run with 24-, 31-, or 64-bit addressing (and can switch among these if needed). Programs can use a mixture of instructions with 24-bit, 32-bit, or 64-bit operands, and can switch among these if needed.

Mainframe operating systems seldom provide complete operational environments. They depend on program products for middleware and other functions. Many vendors, including IBM, provide middleware and various utility products. Middleware is a relatively recent term that can embody several concepts at the same time. A common characteristic of middleware is that it provides a programming interface, and applications are written (or partially written) to this interface.

51 SWG Competitive Project Office
Introduction to IBM’s System z Clustering Technologies: Parallel Sysplex and LPAR Cluster

52 Objectives
In this session you will learn about:
Parallel Sysplex, the z/OS and zSeries clustering technology: software and hardware executing as one server, with multiple LPARs running as one server, z/OS running in each LPAR, and up to 32 system images (z/OS) running as a Parallel Sysplex
Intelligent Resource Director (IRD): LPAR clusters exist within Parallel Sysplex clustering and are associated with Workload Manager (WLM) managing virtual hardware resources
How Parallel Sysplex can achieve continuous availability
Dynamic workload balancing
The single system image

53 Five Nines is the Gold Standard of Availability
99.999% availability is sometimes referred to as “continuous operation”: 5 minutes of downtime per year out of 24x365.
Survey of 28 companies with mixed environments: average mainframe system availability = 99.993%, or 36 minutes per year of downtime; average distributed server availability = 99.909%, or 8 hours per year per server of downtime.
Small improvements in the “nines” become more and more difficult to achieve; distributed systems require comparable hardware and software design, test, and service strategies.
With 99.999% availability, or about 5 minutes of downtime per year, a properly configured Parallel Sysplex cluster can deliver near continuous availability.

More sites, more servers, and more complexity increase IT labor costs. From the survey: users lose $0.24 per $1 spent on hardware versus $0.37 on every $1, and lose 13 times the productivity costs (salary x hours lost); replacing distributed hardware in year 4 changes the mainframe cost comparison from 35% more at the end of year 3 to 3% less at the end of year 4.

Cost impact of downtime by industry segment: Energy $2,818K; Telecommunications $2,066K; Manufacturing $1,611K; Financial $1,495K; Information Technology $1,345K; Insurance $1,202K; Retail $1,107K; Pharmaceuticals $1,082K; Banking $997K; Consumer Products $786K; Chemicals $704K; Transportation $669K.

Downtime (hours per year): mainframe 0.6 (99.993% availability) vs. distributed 7.98 (99.909% availability), roughly 13 times the downtime costs. Source: IDC survey of 28 customers with mixed environments, March 12, 2007.

54 Gartner Ranks System z Tops in Availability (Parallel Sysplex)
Availability rankings, selected platforms (best to worst): IBM System z mainframe; IBM Power5 UNIX; Sun Fire/SPARC IV; HP 9000; Unisys ES7000; Dell PowerEdge; HP Integrity WINTEL.
Gartner criteria: single-system availability, planned downtime, disaster tolerance and recovery, failover clustering, high-availability services.

A GartnerGroup study of 240 observations from 190 firms shows that S/390 Parallel Servers coupled with Parallel Sysplex technologies lead the IT industry in availability. Parallel Sysplex technology outages were 10 minutes over a 24x365 period (99.998% availability). UNIX systems in the study averaged 23.6 hours of downtime per year. In the event of a hardware or software outage, either planned or unplanned, workloads can be dynamically redirected to available servers, providing continuous application availability.

Roughly 80% of downtime is scheduled (new software releases, upgrades, maintenance) and 20% is unscheduled (source: Gartner Group); of the unscheduled portion, about 40% is operator error, 40% application error, and 20% other (network failures, disk crashes, power outages, etc.). Avoid making applications aware of availability solutions.

Source: Gartner, Server Scorecard Evaluation Model version 2, May 2006.

55 What is a Parallel Sysplex? Continuous Availability
Builds on the strength of zSeries servers by linking up to 32 images to create the industry’s most powerful commercial processing clustered system.
Innovative multi-system data-sharing technology: direct concurrent read/write access to shared data from all processing nodes, with no loss of data integrity and no performance hit.
Transactions and queries can be distributed for parallel execution based on available capacity and are not restricted to a single node.
Every “cloned” application can run on every image.
Hardware and software can be maintained non-disruptively.
Within a Parallel Sysplex cluster, it is possible to construct an environment with no single point of failure. Peer instances of a failing subsystem can take over recovery of resources held by the failing instance, or the failing subsystem can be automatically restarted on still-healthy systems. In a Parallel Sysplex the loss of a server may be transparent to the application, and the server workload is redistributed automatically with little performance degradation.
Software upgrades can be rolled through one system at a time on a timescale that makes sense for the business.

56 Consider the Power
Parallel Sysplex: up to 32 system images
z10 server: up to 64 processors per image
Up to 920 MIPS per processor
Up to 1,884,160 MIPS in a Parallel Sysplex

57 Z Series Continuous Availability
[Slide diagram: three levels of zSeries continuous availability: a single system, a Parallel Sysplex of 1 to 32 systems, and GDPS spanning Site 1 and Site 2.]
Single system: built-in redundancy, Capacity Upgrade on Demand, Capacity Backup, hot-pluggable I/O.
Parallel Sysplex: addresses planned and unplanned hardware/software outages; flexible, nondisruptive growth; capacity beyond the largest CEC; scales better than SMPs; dynamic workload/resource management.
GDPS: addresses site failure and site maintenance; synchronous/asynchronous data mirroring; eliminates tape/disk single points of failure; no or some data loss; application independent.

58 Horizontal Scaling and High Availability
Parallel Sysplex: loosely coupled multiprocessing, a hardware/software combination.
Requires: data sharing, locking, cross-system workload dispatching, synchronization of time for logging, etc., and high-speed system coupling.
Hardware: the Coupling Facility; Integrated Cluster Bus and ISC links to provide high-speed connections to the CF; the Sysplex Timer for time-of-day clock synchronization.
Implemented in z/OS and its subsystems: the Workload Manager in z/OS, plus compatibility and exploitation in software subsystems including IMS, VSAM, RACF, VTAM, JES2, etc.
Rolling maintenance of system and application code; applications see the “same” system across the z9s, with common resources.

Let’s talk a bit about how to scale the zSeries complex horizontally. In computer architectures, clustering, or “loosely coupled multiprocessing,” is frequently used to connect multiple independent systems together to act as a single system image. Some processor architectures do this well, and some not so well. The Parallel Sysplex technology, as implemented on the z/OS operating system, is one of the industry’s best clustering technologies. Parallel Sysplex is a combination of hardware and software technology that allows a cluster of zSeries processors running z/OS to work together as a single system image. To run as a single image, several functions are needed, including data sharing, locking, cross-system workload dispatching, time synchronization, and a high-speed connection between systems. To accomplish this, the zSeries hardware architecture provides hardware functions such as the Coupling Facility, a hardware device that is, in essence, a large shared memory unit, connected via high-speed links such as the Integrated Cluster Bus or ISC. Also needed is a shared clock to provide time synchronization; the Sysplex Timer provides that feature. And, of course, since data is shared across systems, a robust data sharing capability is needed, and this is provided in z/OS and in the individual subsystems (for example, DB2 data sharing is required to exploit the Parallel Sysplex). In the software, z/OS implements a number of things to exploit the Parallel Sysplex enabling hardware. The Workload Manager component of z/OS takes care of cross-sysplex workload balancing, and each software subsystem (which we will talk more about later), such as IMS, RACF, and so forth, takes care of its exploitation of the sysplex by handling cross-system synchronization and resource sharing.

59 Parallel Sysplex Hardware
Coupling Facility (x2): Coupling Facility Control Code (“LICC”); stand-alone CF or Internal Coupling Facility (ICF)
Coupling links (x2): external coupling links (ISC, ICB) or internal coupling link (IC)
Common time source: 9037 Sysplex Timer (x2) or Server Time Protocol (STP) hardware support

60 Coupling Facility – Glue for Communication
Within the Coupling Facility, storage is dynamically partitioned into structures, and z/OS services manipulate data within the structures. Each of the following structures has a unique function:
Cache structure: supplies a mechanism called buffer invalidation to ensure consistency of cached data. The cache structure can also be used as a high-speed buffer for storing shared data with common read/write access.
List structure: enables authorized applications to share data that is organized in a set of lists, for implementing functions such as shared work queues and shared status information.
Lock structure: supplies shared and exclusive locking capability for serialization of shared resources down to a very small unit of data.
The technology that makes high-performance sysplex data sharing possible is a combination of hardware and software services available in the supporting z/OS releases. A Coupling Facility can be a zSeries Coupling Facility standalone model or a logical partition of an IBM zSeries server. High-bandwidth fiber optic links known as Coupling Facility channels provide high-speed connectivity between the Coupling Facility and the systems directly connected to it.
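As a rough illustration of what a lock structure provides (the real Coupling Facility implementation is hardware-assisted, sysplex-wide, and far more granular), the sketch below models shared versus exclusive access to a resource; all names are invented.

#include <stdbool.h>

struct cf_lock {
    int  shared_holders;   /* concurrent readers             */
    bool exclusive;        /* a single writer holds the lock */
};

/* Try to obtain the lock; return false if there is contention. */
bool obtain(struct cf_lock *l, bool want_exclusive)
{
    if (want_exclusive) {
        if (l->exclusive || l->shared_holders > 0)
            return false;          /* writer must wait for everyone */
        l->exclusive = true;
    } else {
        if (l->exclusive)
            return false;          /* readers wait for the writer   */
        l->shared_holders++;
    }
    return true;
}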

61 Internal Coupling Facility (ICF)
Spare CPs can be used as CF CPs. ICFs can only run CFCC. MSUs in ICFs "don't count". Accessed via external links. [Diagram: z/OS LPARs and dedicated ICFs running under PR/SM on one server, connected by coupling links.] zCPO zClass Introduction to z/OS

62 System Managed CF Structure Duplexing
OS: z/OS v1.2 or later. ICFs: zSeries G5, G6. CFs: R06 or zSeries. CFCC: Level 11 (G5/G6) or Level 12 or higher (zSeries). [Diagram: two ICFs maintaining duplexed copies of a structure.] Automatic rebuild for planned reconfiguration Automatic switchover for unplanned outages Automatic duplexing re-establishment Overlapped requests for high performance Consistent recovery mechanism Reduced complexity Faster than structure rebuild Enables a robust "all-ICF" configuration zCPO zClass Introduction to z/OS

63 Parallel Sysplex Availability Technologies
Unplanned outages: configure for no single point of HW/SW failure; fault-tolerant HW, recoverable SW; failure isolation; system detected (e.g. heartbeats, event triggers, soft-fail thresholds); policy managed (e.g. SFM, WLM, ARM, etc.); dynamic workload routing.
Planned outages: n, n+1 support; non-disruptive rolling change management; redundancy to address risk tolerance (e.g. 2 vs. 3 elements); dynamic workload balancing; processes to support Parallel Sysplex availability (e.g. change, problem, systems management); thorough testing/training. zCPO zClass Introduction to z/OS

64 Remember this Benchmark? Bank of China Benchmark
Database: 380 million accounts. Storage: 52 TB on 4 DS8300 subsystems. Requirement: 4,100 transactions per second. [Diagram: two 54-way z9 servers, each hosting two 19-CP BANCS/CICS/DB2 LPARs and a 5-CP Coupling Facility LPAR (IC links within each server, ICB4 links between them), plus a 5-CP workload driver.] zCPO zClass Introduction to z/OS

65 Bank of China Parallel Sysplex Benchmark
Near-linear scalability on a Parallel Sysplex running CICS and DB2 in a single system image, with no partitioning required. Goal: 4,100 TPS. Huge scale-up requires huge I/O bandwidth capacity. zCPO zClass Introduction to z/OS

66 Parallel Sysplex Software Cluster Technology
Software component – Function:
XCF – Sysplex communication / status monitoring / group services
ARM – Subsystem restart (within CEC or cluster)
CFRM – CF resource management policy
System Logger – High-performance logging, merged logs
WLM – Goal-oriented unit-of-work management
WLM Enclaves – Multi-system unit of work
VTAM Generic Resource – Network single system image
VTAM MNPS – High-availability network connection
TCP/IP VIPA – Network single system image
TCP/IP VIPA takeover/takeback – High-availability network connection
CICSPlex/SM, IMS and MQ SMQ – Transaction routing/balancing
DB2 Sysplex Query Parallelism – SQL query de/re-composition
Batch PipePlex – Cluster I/O piping
ESCON Manager – ESCON I/O systems management
DB2, VSAM TVS, IMS/DB – Full read/write data sharing
IRLM – Sysplex database locking
(Legend: base operating system exploitation – resource sharing; additional subsystem exploitation – resource/data sharing.) zCPO zClass Introduction to z/OS

67 Failure Recovery enabled by Sysplex & ARM
z/OS Workload Manager Sysplex-wide workload management to one policy Sysplex Failure Manager Specify failure detection and recovery actions Automatic Restart Manager Fast recovery of critical subsystems Cloning and symbolics Used to replicate applications across the nodes zCPO zClass Introduction to z/OS

68 zSeries Parallel Sysplex Resource Sharing
This is not to be confused with application data sharing This is sharing of physical system resources such as tape drives, catalogs, consoles This exploitation is built into z/OS Benefits System Management Performance Reduced hardware requirements Immediate Value $$$ zCPO zClass Introduction to z/OS

69 Exploiters Resource Sharing Enables PSLC Licensing Charges
zCPO zClass Introduction to z/OS

70 zCPO zClass Introduction to z/OS
Dynamic Workload Manager (WLM) zCPO zClass Introduction to z/OS

71 Intelligent Resource Director (IRD)
Manages resources within a server Processors and I/O Policy based Integration of z/OS Workload Manager Parallel Sysplex PR/SM™ Directs physical resources to workload Handles unpredictable workloads Increases resource efficiency zCPO zClass Introduction to z/OS

72 IRD, WLM and LPAR Clusters
IRD is code executing within the hardware. WLM manages performance of: tasks within an address space; address spaces within a z/OS image; subsystems within a z/OS image; subsystems across multiple images within a Sysplex; LPAR clusters within a Sysplex on a single server; … and more, like TCP/IP routing and creating address spaces to handle workload peaks. LPAR clusters managed as a ‘group’ provide:
LPAR CPU management – WLM requests reassignment of virtual CPs based on LPAR weights (goals) defined by the IT shop.
Dynamic channel path management – WLM requests reassignment of virtual channel paths to improve I/O bandwidth to an LPAR, based on weights (goals) defined by the IT shop.
Channel subsystem priority queuing – WLM requests reassignment of I/O priority for an LPAR to reduce I/O wait time for that LPAR’s I/O, based on weights (goals) defined by the IT shop. zCPO zClass Introduction to z/OS

73 Intelligent Resource Director LPAR CPU Management
Functions LPAR Weight Management Vary Logical CPU Management Benefits Manages CPU resources across LPARs in accordance with workload goals. Dynamically change LPAR weights No operator intervention required Manages tradeoffs between performance and efficient use of resources Simplifies capacity planning Balances multiprocessing level with processing speed Helps Reduce LPAR overhead Description LPAR Weight Management Dynamically manages a partition's CPU access based on workload demands and goals Vary Logical CPU Management Optimizes number of logical CPs based on partition's current weight and CPU consumption Benefits Provides flexibility in managing CPU resources across Logical PARtitions in accordance with workload goals. Dynamic change of LPAR weights No operator intervention required Manage tradeoffs between performance and efficient use of resources Prevent or mitigate possible capacity problems Balances multiprocessing level with processing speed for each workload Helps Reduce LPAR overhead zCPO zClass Introduction to z/OS
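As a mental model of LPAR weight management, the sketch below shifts a slice of PR/SM weight from a partition that is beating its goal to one that is missing it, keeping the cluster's total weight constant. This is only a caricature of the idea, under assumed names and numbers; the actual WLM algorithm also weighs importance levels, projected net value, and donor/receiver eligibility.

```python
# Illustrative sketch of goal-driven LPAR weight management. The real WLM
# decision logic is far more involved; all field names and values are invented.

def rebalance_weights(lpars, step=5):
    """lpars: list of dicts with 'name', 'weight', 'goal_rt', 'actual_rt'."""
    # Performance index > 1.0 means the workload is missing its response-time goal.
    for lp in lpars:
        lp["pi"] = lp["actual_rt"] / lp["goal_rt"]
    receiver = max(lpars, key=lambda lp: lp["pi"])
    donor    = min(lpars, key=lambda lp: lp["pi"])
    if receiver["pi"] > 1.0 and donor["pi"] < 1.0 and donor["weight"] > step:
        donor["weight"]    -= step      # total weight across the cluster is unchanged
        receiver["weight"] += step
    return lpars

cluster = [
    {"name": "PROD1", "weight": 60, "goal_rt": 0.5, "actual_rt": 0.9},  # missing its goal
    {"name": "TEST1", "weight": 40, "goal_rt": 2.0, "actual_rt": 0.4},  # beating its goal
]
print(rebalance_weights(cluster))   # PROD1's weight grows to 65, TEST1 shrinks to 35
```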

74 Intelligent Resource Director for Linux
IRD for Linux: z/OS WLM CPU Weight Management Supports Linux, VM, VSE, TPF Prerequisites Non-IFL Linux partition z/OS v1.2 or later z800, z900, or later zCPO zClass Introduction to z/OS

75 z/OS Intelligent Resource Director
Goal-oriented management of LPAR resources: processors & channels. Integration of: PR/SM, Workload Manager, Channel Subsystem, Parallel Sysplex technology. Providing: LPAR CPU management. [Diagram: z/OS LPARs A1–A4 and a CF under PR/SM, with the Intelligent Resource Director moving resources between LPARs through the channel subsystem.] zCPO zClass Introduction to z/OS

76 Intelligent Resource Director Channel Subsystem Priority Queuing
Description I/O Priority Queuing prioritizes I/O within an LPAR Channel Subsystem Priority Queuing prioritizes I/O within an LPAR cluster Benefits Allows better channel resource management High priority work is given preferential access to the channel Can reduce channel requirements Managed within zSeries server
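The queuing discipline itself is easy to picture. The sketch below is a generic priority queue, not the channel subsystem implementation: higher-priority I/O requests are selected for the channel ahead of lower-priority requests that arrived earlier, which is the effect Channel Subsystem Priority Queuing provides under WLM control.

```python
# Minimal sketch of priority queuing for I/O requests. Illustrative only; the
# real function is implemented in the channel subsystem under WLM control.
import heapq, itertools

class ChannelQueue:
    def __init__(self):
        self._heap, self._seq = [], itertools.count()

    def add(self, priority, request):
        # Lower number = more important; the sequence number keeps FIFO order
        # among requests of equal priority.
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def next_request(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = ChannelQueue()
q.add(3, "batch read")        # arrives first, low priority
q.add(1, "CICS DB2 write")    # arrives later, high priority
print(q.next_request())       # -> "CICS DB2 write"
```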

77 Intelligent Resource Director Dynamic Channel Path Management
Description Dynamically balances I/O connectivity based on workload demand Benefits More efficient use of hardware resource Reduces channel requirements Simplifies I/O configuration planning and definition zCPO zClass Introduction to z/OS

78 Batch Workload Balancing
Performance Rebalances batch initiators “Move” initiators to images with capacity More aggressively reducing them on constrained systems Starting new ones on less constrained systems Checking for potential rebalancing every 10 sec. Enhanced initiator balancing WLM batch initiator management was introduced with OS/390 R4 and OS/390 JES2 R4. Later, in OS/390 R8, JES3 started to provide similar functionality for JES3 environments. Beginning in OS/390 R4, WLM has the capability of controlling the rate at which queued jobs are initiated. Moreover, WLM will dynamically change the number of WLM initiators and/or their work selection criteria in an attempt to meet installation-defined goals. The installation can choose between JES-managed initiators and WLM-managed initiators by job class. Both types of initiators can coexist. With WLM-managed initiators, it is WLM that controls the number and placement of the initiator address spaces. New initiators are started when service class goals are missed, when a system is underutilized and there are jobs waiting to be selected, or when jobs have an affinity to a system on which no initiator is available yet. Decision factors are the available CPU and memory resources on a system, the service class’s importance, and the projected net value on overall goal achievement. WLM stops initiators when the number of started initiators is 1.5 times the long-term average queue length, when a system runs short of CPU or memory, or when the last initiator was inactive for 1 hour. With z/OS V1R4, WLM improves the balancing of WLM-managed batch initiators between the systems of a sysplex. On highly utilized systems the number of initiators can be reduced while more are started on less utilized systems. This enhancement improves the performance of the sysplex with better use of the processing capacity of each system. WLM attempts to distribute the initiators across all members in the sysplex to optimize throughput while taking care that jobs with affinities to specific systems are not hurt by WLM decisions. When the available CPU capacity of a system decreases to less than 5%, WLM stops an initiator address space when the current system is observed as the system with the highest CPU demand and when there is another remote system that has enough available resources to start a new initiator. This evaluation is done every 10 seconds. The order of decrease is to stop initiators serving lower-importance service classes first. WLM increases the number of initiators on less utilized systems. To speed up job selection for a high volume of waiting jobs, up to 5 initiators can be started every 10 seconds on underutilized systems that have enough idle CPU and memory capacity. This value used to be 1 before z/OS V1R4. Benefits More aggressive rebalancing Rebalancing even takes place where no reduction of initiators happened before JES3 support in z/OS 1.5 zCPO zClass Introduction to z/OS
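The initiator-balancing rules in the notes above lend themselves to a short sketch. The Python below encodes only the thresholds quoted in the text (10-second evaluation interval, the 5% available-CPU threshold, up to 5 initiator starts per interval, and the 1.5x long-term average queue length cap); everything else, including the data layout, is a simplifying assumption and not the actual WLM code.

```python
# Rough sketch of the rebalancing rules described above; a simplification for
# illustration only.

MAX_STARTS_PER_CYCLE = 5        # initiators started per 10-second interval (z/OS V1R4+)
LOW_CPU_THRESHOLD    = 0.05     # "available CPU capacity less than 5%"
QUEUE_FACTOR         = 1.5      # stop initiators above 1.5x long-term avg queue length

def rebalance(systems, avg_queue_length):
    """systems: list of dicts with 'name', 'idle_cpu' (0..1), 'initiators', 'waiting_jobs'."""
    actions = []
    busiest = min(systems, key=lambda s: s["idle_cpu"])
    spare   = max(systems, key=lambda s: s["idle_cpu"])

    # Reduce initiators on a constrained system if another system can absorb work.
    if busiest["idle_cpu"] < LOW_CPU_THRESHOLD and spare["idle_cpu"] > LOW_CPU_THRESHOLD:
        busiest["initiators"] -= 1
        actions.append(("stop", busiest["name"]))

    # Stop initiators when far more are started than the queue justifies.
    for s in systems:
        if s["initiators"] > QUEUE_FACTOR * avg_queue_length:
            s["initiators"] -= 1
            actions.append(("stop", s["name"]))

    # Start up to 5 initiators per cycle on an underutilized system with waiting jobs.
    if spare["waiting_jobs"] > 0 and spare["idle_cpu"] > LOW_CPU_THRESHOLD:
        starts = min(MAX_STARTS_PER_CYCLE, spare["waiting_jobs"])
        spare["initiators"] += starts
        actions.append(("start", spare["name"], starts))
    return actions

sysplex = [{"name": "SYSA", "idle_cpu": 0.02, "initiators": 12, "waiting_jobs": 3},
           {"name": "SYSB", "idle_cpu": 0.40, "initiators": 2,  "waiting_jobs": 8}]
print(rebalance(sysplex, avg_queue_length=4))   # run once per 10-second evaluation cycle
```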

79 TCP/IP Workload Balancing
Spraying – “Dumb” round robin.
DNS/WLM – Request routed to the best host to balance workload.
Network Distributor – External box; requires connectivity to each host; routes based upon WLM, user, application, QoS, etc.; similar to Cisco Multi-Node Load Balancer.
Sysplex Distributor – No external box required; connects to a node within the Sysplex; routes to a host based upon WLM, user, application, QoS, etc.; better WLM coordination; removes complexities of multiple LPARs in a CEC with OSA.
Load Balancing Advisor – The load balancer resides in the network (typically a router-type node). zCPO zClass Introduction to z/OS
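The practical difference between “dumb” spraying and WLM-aware distribution is easiest to see side by side. In the sketch below the host names and weights are invented; in a real sysplex the weights would come from WLM capacity and QoS recommendations rather than a static table.

```python
# Contrast between round-robin spraying and workload-aware distribution such as
# Sysplex Distributor performs with WLM input. Illustrative values only.
import itertools, random

hosts = ["zos1", "zos2", "zos3"]

# Round robin: every host gets the same share regardless of its current capacity.
rr = itertools.cycle(hosts)
def round_robin_choice():
    return next(rr)

# WLM-style: route in proportion to each host's currently available capacity.
wlm_weights = {"zos1": 70, "zos2": 20, "zos3": 10}   # e.g. displaceable capacity
def wlm_choice():
    return random.choices(list(wlm_weights), weights=wlm_weights.values())[0]

print([round_robin_choice() for _ in range(6)])   # zos1, zos2, zos3, zos1, ...
print([wlm_choice() for _ in range(6)])           # mostly zos1
```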

80 Dynamic VIPA / VIPA Takeover
Single system image to the IP network.
Dynamic VIPA backup – If a host suffers an outage, the stack may be moved to another host manually; no configuration changes to routers.
VIPA takeover – This process is automated; coordinated with application dependencies.
VIPA takeback – Prior to a planned outage; “takeback” after the host is brought back online.
Dynamic XCF – Allows existing TCP/IP stacks to discover a new IP stack being added to the Sysplex; uses existing physical connectivity to the network (only needs a CF link); no definition changes. zCPO zClass Introduction to z/OS

81 zSeries Sysplex Distributor
Provides a single Sysplex-wide IP address built on dynamic VIPA Distributes network attachment based on application placement and recovery requirements Dynamic workload balancing Reduces planned outages Rolling upgrades Hardware changes Reduces unplanned outages Software & hardware failures Network failures Simplifies client view of zSeries [Diagram: TCP/IP clients reaching DB2 on z/OS-1, z/OS-2, and z/OS-3 through one distributed IP address.] zCPO zClass Introduction to z/OS

82 zCPO zClass Introduction to z/OS
zFS – z File System zFS is “Sysplex aware” for file systems Write requests forwarded to the USS owner Reads can be managed in cache If the owner fails, USS moves ownership to another LPAR Improved Byte Range Lock Manager (BRLM) availability Locks replicated on a backup system zFS and HFS are both non-sysplex aware - they get their support for Shared HFS for free - it's all in USS (limited to zFS compatibility mode aggregates) USS determines which system is to own a R/W file system and forwards all requests to that system When a system goes down, USS moves R/W file systems owned by that system to another system R/O file systems are sysplex aware and requests are handled on each system Directions..... zFS sysplex aware (1.8) USS sends I/O requests directly to the local zFS zFS decides whether the request needs to go to the owning system zFS caching may avoid XCF communications zFS moves a zFS file system off a failing system (and to balance I/O) zFS sysplex direct file I/O (1.9) zFS does direct read from a non-owning system zFS does direct file update from a non-owning system zFS does direct file write, create from a non-owning system (must call the owning system to reserve space) Metadata updates done at the owning system zCPO zClass Introduction to z/OS
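The owner model described above can be illustrated with a small sketch: reads may be satisfied from a local cache, writes are function-shipped to the owning system (which invalidates other members' cached copies), and ownership moves if the owner fails. All names here are invented for illustration; this is not the USS or zFS programming interface.

```python
# Conceptual sketch of the "owner" model for a shared read/write file system.

class SharedFileSystem:
    def __init__(self, owner, members):
        self.owner, self.members = owner, list(members)
        self.data = {}                                # authoritative copy at the owner
        self.cache = {m: {} for m in members}         # per-system local caches

    def read(self, system, path):
        if path in self.cache[system]:
            return self.cache[system][path]           # served locally from cache
        value = self.data.get(path)                   # otherwise fetched via the owner
        self.cache[system][path] = value
        return value

    def write(self, system, path, value):
        # All writes are function-shipped to the owning system, which updates the
        # data and invalidates other members' cached copies.
        self.data[path] = value
        for m in self.members:
            if m != system:
                self.cache[m].pop(path, None)
        self.cache[system][path] = value

    def owner_failed(self):
        # Ownership moves to a surviving member so the file system stays available.
        self.members.remove(self.owner)
        self.owner = self.members[0]

fs = SharedFileSystem(owner="SYSA", members=["SYSA", "SYSB"])
fs.write("SYSB", "/u/joe/data", "v1")   # shipped to the owner; SYSA's cached copy invalidated
print(fs.read("SYSB", "/u/joe/data"))   # served from SYSB's local cache: 'v1'
```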

83 Aspects of Availability
Continuous Operations Non-disruptive backups and system maintenance coupled with continuous availability of applications Disaster Recovery Protection against unplanned outages such as disasters through reliable, predictable recovery High Availability Fault-tolerant, failure-resistant infrastructure supporting continuous application processing Protection of critical business data Operations continue after a disaster Recovery is predictable and reliable Costs are predictable and manageable zCPO zClass Introduction to z/OS

84 Mainframe Disaster Recovery is Based on Parallel Sysplex
[Diagram: a primary site and a backup site, each hosting distributed production, development & test, and batch workloads; recovery options range from PTAM ("pick-up truck access method") and takeover-and-restart to disk mirroring between the sites.] It has been observed that many companies have business continuance plans developed on the premise that back office and manual processes will keep the business running until computer systems are available. Characteristics of these recovery models may allow critical applications to recover within 24 to 48 hours, with data loss potentially exceeding 24 hours, and full business recovery taking days or weeks. As companies transform their business to compete in the e-marketplace, business continuity strategies and availability requirements should be reevaluated to determine if they are based on today’s business objectives.
With Parallel Sysplex and disk mirroring: the same systematic design applies to all applications and data; recovery is automatic and fast; integrity is preserved; additional cost is minimal.
With distributed takeover and restart: you must design a site failover scheme for each application and database; recovery is manual and slow; it is easy to lose synchronization and integrity; you must pay for duplicate hardware and software. zCPO zClass Introduction to z/OS

85 zCPO zClass Introduction to z/OS
GDPS/PPRC Experience Bank Austria Creditanstalt Recovery window reduced from 48 hours to less than two hours Planned site switch completed within the two-hour target Significant reduction of on-site manpower and skill level required to manage planned and unplanned reconfigurations Dynamic switchover of disk subsystems completes in seconds No loss of committed data Met RTO and RPO objectives, and reduced the number of staff and amount of manpower required during DR drills. zCPO zClass Introduction to z/OS

86 zCPO zClass Introduction to z/OS
PPRC and XRC Overview S/390 z/OS UNIX NT 1 4 3 2 PPRC PPRC (Metro Mirror) Synchronous remote data mirroring Application receives “I/O complete” when both primary and secondary disks are updated Typically supports metropolitan distance Performance impact must be considered Latency of 10 us/km 1 4 3 2 SDM XRC XRC (z/OS Global Mirror) Asynchronous remote data mirroring Application receives “I/O complete” as soon as primary disk is updated Unlimited distance support Performance impact negligible System Data Mover (SDM) provides Data consistency of secondary data Central point of control Peer to Peer Remote Copy (PPRC) is a synchronous copy technology. As soon as data is written to the Primary disk subsystem, the control unit forwards it on to the secondary. The secondary writes it and sends an acknowledgement back to the primary. At this point the primary lets the application know that the I/O completed. Since the control unit does not care where the I/O request came from, PPRC supports any operating system. eXtended Remote Copy (XRC) is an asynchronous copy technology. The System Data Mover (SDM) is a component of z/OS. It reads data from the Primary control units, and coordinates applying them to the Secondary volumes. XRC only works with z/OS data. GDPS is a multi-vendor solution. It supports IBM, EMC, and HDS disk. zCPO zClass Introduction to z/OS

87 GDPS – Geographically Distributed Parallel Sysplex
[Diagram: primary and secondary sites with z/OS, UNIX, and NT hosts, ESCON® connectivity, the SDM, virtual tape controllers, catalogs, TMC and TCDB, linked by PPRC, XRC, and Peer-to-Peer VTS.] Near Continuous Availability & Disaster Recovery – GDPS/PPRC (Peer to Peer Remote Copy (PPRC) - synchronous): Multisite Sysplex (fiber distance between sites up to 40 km maximum); no or limited data loss in unplanned failover - user policy; planned and unplanned reconfiguration support. Disaster Recovery solution – GDPS/XRC (eXtended Remote Copy (XRC) - asynchronous): supports unlimited distance; production systems in Site 1; limited data loss to be expected in unplanned failover; GDPS initiates restart of production systems in Site 2. Common functions (GDPS/PPRC and GDPS/XRC): GDPS solution manages tape-resident data; point-in-time copy created (FlashCopy) intended to maintain D/R readiness during resynchronization and to perform D/R testing while maintaining D/R readiness; management of zSeries operating systems. zCPO zClass Introduction to z/OS

88 HyperSwap – the Technology
Brings different technologies together to provide a comprehensive application and data availability solution HyperSwap – the Technology Substitutes PPRC secondary for primary device Automatic – No operator interaction Fast – Can swap a large number of devices Non-disruptive – applications keep running Includes volumes with Sysres, page DS, catalogs Hardware triggers: I/O errors, boxed devices, control unit failures IOS timing trigger: autonomic detection of “soft” failures; customer-defined timing thresholds to trigger HyperSwap Dual-site and single-site environments GDPS/PPRC GDPS/PPRC HyperSwap Manager [Diagram: an application’s UCB re-pointed from the PPRC primary (P) to the secondary (S).] GDPS/PPRC HyperSwap notes: The GDPS/PPRC HyperSwap function is designed to broaden the continuous availability attributes of GDPS/PPRC by extending the Parallel Sysplex redundancy to disk subsystems. Planned HyperSwap function provides the ability to: • Transparently switch all primary PPRC disk subsystems with the secondary PPRC disk subsystems for a planned reconfiguration • Perform disk configuration maintenance and planned site maintenance without requiring any applications to be quiesced. Planned HyperSwap function became generally available December 2002. Unplanned HyperSwap function contains additional function to transparently switch to use secondary PPRC disk subsystems in the event of unplanned outages of the primary PPRC disk subsystems or a failure of the site containing the primary PPRC disk subsystems. Unplanned HyperSwap support allows: • Production systems to remain active during a disk subsystem failure. Disk subsystem failures will no longer constitute a single point of failure for an entire Parallel Sysplex. • Production servers to remain active during a failure of the site containing the primary PPRC disk subsystems if applications are cloned and exploiting data sharing across the two sites. Even though the workload in the second site will need to be restarted, an improvement in the Recovery Time Objective (RTO) will be accomplished. Unplanned HyperSwap function became generally available February 2004. Site failover: production systems can stay active. Today, CF data is not consistent, so we need to restart DB2; we are moving in the direction of not needing to restart DB2. RTO is minutes. Planned reconfigs: site maintenance can be done without application impact. With a multisite workload, the application pauses about 70 seconds while we switch UCB addresses but does not go down (with a single-site workload, it needs to be brought up on site 2). zCPO zClass Introduction to z/OS
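Conceptually, HyperSwap re-points the device control blocks that application I/O resolves through from the PPRC primary to the secondary, so work continues without a re-IPL. The sketch below is a deliberately simplified illustration of that idea with invented names and triggers, not the actual IOS/GDPS implementation.

```python
# Simplified illustration of the HyperSwap idea: swap the active pointer of every
# mirrored device pair from the PPRC primary to the secondary when a trigger fires.

class DevicePair:
    def __init__(self, dev_num, primary, secondary):
        self.dev_num, self.primary, self.secondary = dev_num, primary, secondary
        self.active = primary                      # the UCB currently resolves to the primary

TRIGGERS = {"io_error", "boxed_device", "control_unit_failure", "ios_timeout"}

def hyperswap(pairs, trigger):
    """Swap every mirrored pair to its secondary when a recognized trigger fires."""
    if trigger not in TRIGGERS:
        return False
    for p in pairs:                                # the swap covers the whole consistency group
        p.active = p.secondary
    return True

config = [DevicePair("1000", "PRI.1000", "SEC.2000"),
          DevicePair("1001", "PRI.1001", "SEC.2001")]
hyperswap(config, "control_unit_failure")
print([p.active for p in config])                  # ['SEC.2000', 'SEC.2001']
```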

89 GDPS/PPRC HyperSwap Manager
Availability Extends Parallel Sysplex availability to disk subsystems System Management Simplifies management of the Remote Copy configuration, reducing storage management costs Reduces time required for Remote Copy implementation Combines the features of Remote Copy management with the automation of GDPS Effective entry-level offering Specially priced Tivoli NetView and System Automation products Positioned to upgrade to full GDPS GDPS / PPRC HyperSwap Application continuous availability during swap of GDPS / PPRC primary/secondary disk subsystems Key Points Significantly shortens GDPS / PPRC site switch time Subsequent releases will provide unplanned outage support with nondisruptive failover to secondary disk subsystems Value even within a single site for control unit backup capability Script The GDPS / PPRC HyperSwap function broadens the continuous availability attributes of GDPS / PPRC by extending the Parallel Sysplex redundancy to disk subsystems. Before HyperSwap, if the primary failed, UCBs still pointed to the primary disk subsystems; one needed to re-IPL systems to point to the secondary copy. Now it takes seconds, fully automated, allowing all aspects of the site switch to be controlled through GDPS, and does not require re-IPLs, so the applications stay available. The next chart gives examples of IBM and customer benchmarks using HyperSwap. Stage 1 of the HyperSwap function (available since 2002) provided the ability to transparently switch all primary PPRC disk subsystems with the secondary PPRC disk subsystems for a planned switch reconfiguration. Stage 1 provides the ability to perform disk configuration maintenance and planned site maintenance without requiring any applications to be quiesced. Stage 2, available 1Q05, supports this for a single-site solution, but still requires a Parallel Sysplex. Stage 3 enhancements are planned to support a single-server solution without the need for a Parallel Sysplex and to provide finer granularity to a control unit level. Large configurations can be supported, as HyperSwap has been designed to provide the capacity and capability to swap a large number of disk devices very quickly. For more information, contact your local IBM representative. zCPO zClass Introduction to z/OS

90 What a Sysplex can do for YOU…
It will address any of the following types of work Large business problems that involve hundreds of end users, or deal with volumes of work that can be counted in millions of transactions per day. Work that consists of small work units, such as online transactions, or large work units that can be subdivided into smaller work units, such as queries. Concurrent applications on different systems that need to directly access and update a single database without jeopardizing data integrity and security. Provides reduced cost through Cost effective processor technology IBM software licensing charges in Parallel Sysplex Continued use of large-system data processing skills without re-education Protection of z/OS application investments The ability to manage a large number of systems more easily than other comparably performing multisystem environments zCPO zClass Introduction to z/OS

91 zCPO zClass Introduction to z/OS
TD Bank Best Practices Background TD Bank has been running Parallel Sysplex Sysplex wide availability % over 10 years Only 1.5 hours planned outage System z is used for Customer Account Data for applications supporting Tellers, Internet Banking and ATMs TD Bank Recommendations Keep sysplex up – do not bring it down Practice Rolling IPLs Exploit concurrent hardware upgrades Use automation Configure your sysplex for availability IMS/DB2 Data-sharing Transaction routing Sysplex Distributor for TCP/IP Online database reorganizations Clone each image Ensure applications exploit parallel sysplex Client Environment System z z/OS DB2 IMS WMQ GDPS Parallel Sysplex Deployment consists of five System z across two sites running 42 M business transactions a day 08 Apr 2002: IBM*** announced today that TD Bank Financial Group* is using IBM software and servers to host the next-generation infrastructure for its primary customer Web sites and e-business applications including Web-banking and online discount brokerage. TD Bank* has migrated its entire e-business infrastructure, choosing to run its high-volume applications on IBM software and hardware to support the Canadian component of its more than 3.8 million on-line customers. Using WebSphere*** infrastructure software and development tools, TD Bank* is combining its multiple retail sites into a single portal that integrates information and transactions. TD Bank's* new portal enables customers to seamlessly apply for mortgages, do their banking, view research online and complete trades. This provides an enhanced customer experience and presents the bank's services as a unified offering, rather than as disparate lines of business accessed on different sites. TD Bank* recently migrated to Java technology to rebuild its infrastructure and support its customer-facing Web sites: WebBroker** for online investing, and EasyWeb*, for Internet banking. The company chose Java because it is a faster and more flexible environment for development and offers a greater choice of tools and applications. The Java infrastructure also enables TD Bank* to reuse existing code and components to support retail and online customer interaction. The Bank* has future plans to exploit the benefits of Web services within their next generation applications. zCPO zClass Introduction to z/OS

92 zCPO zClass Introduction to z/OS
Summary Reduce cost compared to previous offerings of comparable function and performance Continuous availability, even during change Dynamic addition and change Parallel Sysplex builds on the strengths of the z/OS platform to bring even greater availability, serviceability and reliability Scales out at low overhead, with near-linear scaling zCPO zClass Introduction to z/OS

93 Additional Information
GDPS The Ultimate e-business Availability Solution – GF Parallel Sysplex Here are some articles from the Web that may also help. The Alinean ROI Report - January 2004 How To Quantify Downtime itmanagementnews.com/2004/0311.html Research shows application downtime is costly Calculating the cost of downtime - Computerworld security/recovery/story/0,10801,91961,00.html Most firms cannot count cost of IT downtime | The Register How to calculate and convey the true cost of downtime techrepublic.com.com/ html The True Cost of Downtime zCPO zClass Introduction to z/OS

94 zCPO zClass Introduction to z/OS
Backups zCPO zClass Introduction to z/OS

95 Some z/OS Computing Concepts Shed some light on the subject

96 zCPO zClass Introduction to z/OS
Concepts - 1 Interrupts Interrupts are electrical signals built into the hardware to “interrupt” a unit of work. Interrupts can be good things (e.g. completion of an I/O operation) or bad things (e.g. a program error). An interrupt is presented to a processor. The processor circuits cause a state vector (the PSW – Program Status Word) of the executing task to be saved, along with the register values being used by the executing (interrupted) program. The “appropriate” interrupt handler (a program residing in memory from boot) is given control. The interrupt handler saves the machine state of the interrupted task, handles the interrupt and passes control to the dispatcher. Unit of Work (UoW) The z/OS operating system provides support for a unit of work called a TASK, somewhat related to the thread in UNIX. A task consists of one or more programs which execute, perform I/O … Tasks have a security identity, own resources, virtual storage, files, … There is an operating system component called the dispatcher that gives the next ready task control of system resources. Before the dispatcher gives the task control of the system, it restores the machine state of the task. zCPO zClass Introduction to z/OS
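The interrupt-to-dispatcher flow described above can be modeled in a few lines. In the toy sketch below the "PSW" is just a string and the dispatcher is a simple ready queue; it is meant only to show the sequence (save the state of the interrupted task, handle the interrupt, dispatch the next ready task), not how z/OS actually implements it.

```python
# Toy model of interrupt handling and dispatching. Purely illustrative.
from collections import deque

class Task:
    def __init__(self, name):
        self.name, self.state, self.saved_psw = name, "READY", None

ready_queue = deque([Task("JOBA"), Task("JOBB")])
current = None

def take_interrupt(kind):
    """Save the interrupted task's state, handle the interrupt, call the dispatcher."""
    global current
    if current:
        current.saved_psw = f"psw-of-{current.name}"   # stand-in for the saved PSW/registers
        current.state = "READY" if kind == "timer" else "WAIT"
        if current.state == "READY":
            ready_queue.append(current)
    print(f"interrupt handler: processed {kind} interrupt")
    dispatch()

def dispatch():
    """Give control to the next ready task, restoring its saved state first."""
    global current
    current = ready_queue.popleft() if ready_queue else None
    if current:
        current.state = "ACTIVE"                       # the restored PSW/registers would be loaded here
        print(f"dispatcher: {current.name} is now active")

dispatch()                 # JOBA becomes active
take_interrupt("timer")    # JOBA loses control, JOBB is dispatched
```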

97 zCPO zClass Introduction to z/OS
Concepts - 2 SRB – Service Request Block A unit of work performed on behalf of the operating system Higher priority than a task Does not own resources, saving CPU cycles Two types: Local SRB for a specific address space (hold that thought), e.g. cleaning up after I/O Global SRB for work that has to be done before any other system work is done, e.g. a recovery action zCPO zClass Introduction to z/OS

98 zCPO zClass Introduction to z/OS
Concepts - 3 z/OS is a multitasking operating system Only one task can execute on one processor at a time. The executing task is in the active state. Note: with SMT (Simultaneous Multi-threading), more than one UoW can be executing at the same time on a processor; zSeries does not provide SMT. The non-executing tasks will be in different states: READY, WAIT, SUSPEND, and different levels of waiting and ready. Tasks lose control due to interrupts. The interrupts may be of their own doing, or result from work previously started by other tasks, or from machine errors. z/OS is a multiprocessing operating system It is a Tightly Coupled Multi-Processing system (TCMP): one operating system controls the work on more than one processor, all sharing the same real memory and I/O. If there are 4 CPs (Central Processors) there can be 4 tasks active simultaneously. It is a Loosely Coupled Multi-Processing system (LCMP): more than one server is sharing work, but not necessarily sharing the operating system, memory and I/O. This is clustering and in z/OS terms refers to a Sysplex. zCPO zClass Introduction to z/OS

99 zCPO zClass Introduction to z/OS
Concepts - 4 A JOB A job consists of control statements called JCL (Job Control Language) that are created by programmers to specify things like: Programs to be executed; datasets (files) to be used, created, deleted, etc.; Runtime memory requirements and priority; Job class (for queuing and job selection); Etc. A job is like a process in UNIX. A job consists of one or more steps (specified in the JCL). Steps may or may not be executed based on the completion of prior steps. Each job step is one or more tasks. Jobs run in the “background”, i.e. they are not an interactive process; they are also called BATCH jobs. In the past there was no interactive development environment on OS/360. Jobs were written on 80-column cards using keypunches. The cards for many jobs were fed into a card reader and spooled to a tape; then they were read off the tape and executed. The jobs were batched onto the tape. Now the JCL and associated code is developed using TSO (Time Sharing Option) and ISPF (Interactive System Productivity Facility). TSO is an interactive component of z/OS: it runs in the foreground. zCPO zClass Introduction to z/OS
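The job/step relationship can be sketched in a few lines as well. The example below models steps whose execution depends on the return codes of earlier steps, which is the idea behind JCL's conditional step execution; the step names, programs, and return codes are invented for illustration.

```python
# Small model of a job as a sequence of steps, where later steps can be skipped
# depending on the completion (return) codes of earlier ones. Illustrative only.

def run_job(steps):
    """steps: list of (name, program, max_acceptable_prior_rc)."""
    history = {}
    for name, program, max_prior_rc in steps:
        worst_so_far = max(history.values(), default=0)
        if worst_so_far > max_prior_rc:
            print(f"{name}: skipped (a prior step ended with RC={worst_so_far})")
            continue
        rc = program()                      # each step is one or more tasks doing real work
        history[name] = rc
        print(f"{name}: ran, RC={rc}")
    return history

job = [
    ("STEP1", lambda: 0, 0),    # e.g. compile
    ("STEP2", lambda: 8, 0),    # e.g. bind -- fails with RC=8
    ("STEP3", lambda: 0, 4),    # e.g. run -- skipped because a prior step exceeded RC=4
]
run_job(job)
```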

