TWS for z/OS end-to-end & Z centric Implementation Best Practices


1 TWS for z/OS end-to-end & Z centric Implementation Best Practices

2 The goals
Provide best practices and suggestions related to the end-to-end features of TWS for z/OS:
End-to-end with fault tolerance capabilities (aka e2e)
End-to-end with Z centric capabilities (aka Z centric)
They are intended to:
Improve performance
Suggest a more effective use of the features
The agenda:
Overview
Z centric: feature exploitation and best practices
e2e: feature exploitation and best practices

3 IBM Tivoli Workload Automation end-to-end features
... From a single point of control ...
Heterogeneous workloads must be seen as homogeneous, and managed from a single point of control through a unified paradigm:
A single point of control minimizes administrative oversight and time
A full impact view from the point of service delivery improves efficiency, effectiveness, and the alignment of IT to business
Flexibility to establish the single point of control from any end-point
End-to-end workload automation

4 End-to-end with fault tolerant capabilities
This is the plan-based end-to-end, which means:
Hierarchical topology
Optional multi-level configuration
Communication layer: the USS server
Network fault tolerance
Distribution of the plan to the agents
Monitoring allowed at agent level
[Diagram: TWS for z/OS controller (ISPF, z/OS Sysplex) connected through the end-to-end server to a Domain Manager, Fault Tolerant Agents, and Extended Agents on open systems]

5 End-to-end with Z centric capabilities
The TWS controller acts as the choreographer, which means:
Flat topology
Simplified deployment and configuration
No communication layer, no server
Fully centralized and homogeneous control
Direct control over the distributed workload
[Diagram: TWS for z/OS controller (ISPF, z/OS Sysplex) directly controlling Z centric agents and Extended Agents on open systems]

6 Cross dependencies
They allow managing heterogeneous systems by:
Defining, in one scheduling environment, dependencies on batch activities that are managed by another scheduling environment
Controlling the status of these dependencies by navigating from a single user interface across the different scheduling environments
This works for both TWS and TWS for z/OS
[Diagram: cross dependencies between a TWS MDM (managing TWS 1–3) and two TWS for z/OS controllers]

7 Z centric – feature exploitation (1)
The Z centric feature can easily be activated by customizing the controller PARM member:
Set the HTTPOPTS parameters
Add the HTTP/HTTPS parameters to the ROUTOPTS statement
It is very easy to add, delete, or modify Z centric workstations. A MODIFY command dynamically reloads the destination definitions:
/F TW1A,RFRDEST
The same Z centric agent can run workload for several TWS for z/OS controllers at the same time. This:
Greatly simplifies the infrastructure
Reduces the maintenance effort
[Diagram: several TWS for z/OS controllers sharing a single Z centric agent]
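As a rough sketch, the two PARM member changes might look like the following; the statement and keyword names follow the product syntax, but the host names, port, and destination name are illustrative assumptions:

```
/* HTTPOPTS: enable the HTTP layer used by Z centric agents      */
HTTPOPTS HOSTNAME('controller.example.com')   /* assumed host    */
/* ROUTOPTS: one HTTP destination per Z centric agent            */
ROUTOPTS HTTP(ZAGT:'agent1.example.com'/31114) /* assumed values */
```

After editing the member, the /F TW1A,RFRDEST command shown above picks up new or changed destinations without restarting the controller.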

8 Z centric – feature exploitation (2)
It is possible to define multiple Z centric workstations with the same destination. This:
Allows acting globally on a subset of the overall workload run by a given server
Can be very useful if the same server runs workload related to different LOBs
The Z centric workstations offer the same flexibility as the z/OS workstations, which means:
Open time intervals; alternate workstations; parallel servers
It is possible to use TWS for z/OS variables to tailor, in a centralized way, the workload on many distributed servers. In the slide example, the supplied variable for the extended operation name is used to parameterize the remote file name of an FTP job.
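For illustration only, a centralized job definition could embed a supplied variable in its script so the same member serves many servers; the directive below follows the classic variable-substitution style, while the variable name &OEXTNAME and the FTP commands are assumptions, not taken from the product reference:

```
//*%OPC SCAN
//FTPSTEP EXEC PGM=FTP
//SYSIN   DD *
  open target.example.com
  get &OEXTNAME.
  quit
/*
```

At submission time the scheduler would resolve the variable with the operation's extended name, so each occurrence transfers its own remote file.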

9 Z centric – feature exploitation (3)
The Z centric agents support the filewatch feature:
It is an executable that performs advanced file discovery: file creation, deletion, or modification. An example:
filewatch -condition wcr -filename C:\ftpdir\ftp.file -int 30 -deadline 0
It can be very usefully integrated with the FTP job executor to automate file discovery and transfer scenarios. Just create an application running Z centric jobs where:
The first one runs the filewatch executable
Its successor runs an FTP job
Subsequent jobs process the file content
[Diagram: filewatch discovers a file on the Z centric agent, an FTP job transfers it to a data set on z/OS, and later jobs process the content]
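To make the polling idea concrete, here is a minimal Python sketch of what a filewatch-style check does; the condition codes modeled ('wcr' = wait for creation, 'wdl' = wait for deletion) and the timeout behavior are assumptions for illustration, not the real utility's full semantics:

```python
import os
import time

def filewatch(filename, condition="wcr", interval=1.0, deadline=30.0):
    """Poll until a file condition is met or the deadline expires.

    condition: 'wcr' waits for the file to appear,
               'wdl' waits for the file to disappear (assumed mapping).
    Returns True when the condition is met, False on timeout.
    """
    start = time.monotonic()
    while True:
        exists = os.path.exists(filename)
        if (condition == "wcr" and exists) or (condition == "wdl" and not exists):
            return True
        if time.monotonic() - start >= deadline:
            return False
        time.sleep(interval)
```

In the scenario above, the first job would block in a loop like this; only when it ends successfully does the successor FTP job start, so the transfer never runs against a missing file.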

10 Z centric – best practices
The settings defined in the EQQUX001 exit override the other settings. A typical error scenario:
A job ends in error with extended status OSUB
The user submitting the job may be the one defined in the exit rather than the intended one
The TWS controller has to resolve the IP addresses of the Z centric agents, and vice versa. A typical error scenario:
The TWS for z/OS user interface shows a job in "started" status, even though the job has actually been submitted on the server hosting the Z centric agent
In this case, setting the HTTPOPTS HOSTNAME keyword can solve the issue
Consider that the TWS controller tries to connect to a Z centric agent only when it has to run the first daily job. Check the real status of the agent by scheduling a TSO WSSTAT command, e.g.:
WSSTAT SUBSYS(TW1A) WSNAME(ZAGT) STATUS(A)
This can be very useful for agents running workload during the night

11 E2E – feature exploitation (1)
If an FTA is NOT a backup DM, then always set CPUFULLSTAT(OFF). This:
Greatly reduces the network traffic
Reduces the number of events the active DMs have to manage
Always use the mailman servers, by setting the CPUSERVER keyword when defining an FTA or a standard agent. This:
Increases the event-handling speed of the DMs
Makes the e2e network more robust in case of network problems
Always keep agents running jobs linked by predecessor-successor dependencies in the same domain. This:
Reduces the network traffic
Increases the fault tolerance level
[Diagram: FTA1 and FTA2 under the same DM in one TWS domain]
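Putting the first two recommendations together, a CPUREC definition for a plain (non-backup) FTA might look like the following sketch; the keyword names follow the e2e topology statements, while the workstation, node, and domain names are illustrative assumptions:

```
CPUREC CPUNAME(FTA1)             /* fault tolerant agent             */
       CPUOS(UNIX)
       CPUNODE('fta1.example.com')
       CPUDOMAIN(DOM1)           /* same domain as its job's peers   */
       CPUTYPE(FTA)
       CPUFULLSTAT(OFF)          /* not a backup DM: cut the traffic */
       CPUSERVER(A)              /* dedicated mailman server         */
```

Only agents that may take over as backup DM need CPUFULLSTAT(ON), since they must keep a fully updated Symphony file.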

12 E2E – feature exploitation (2)
When designing e2e applications, consider that they will be "mirrored" into Symphony job streams. Take into consideration the main mirroring rules:
The jobs defined on FTWs are present in the Symphony file
The direct predecessors of those jobs (even if they are not scheduled on FTWs) are present in the Symphony file as well
Designing the applications to produce smaller Symphony job streams has several positive effects:
Reduces the DP batch duration, thanks to the shorter time needed to create the Symphony file
Improves performance when e2e applications are dynamically added to the CP
Improves the performance of the TWS agents, especially when they run non-centralized scripts
[Diagram: the Master Domain Manager OPCMASTER builds the Symphony file from the current plan and distributes it to Domain Manager DOM1 and to FTAs FTW1 and FTW2]

13 E2E – feature exploitation (3)
It can be useful to add a dummy predecessor on a NON REP workstation (see op. 15)
The Symphony file can contain unconnected operations, so creating applications whose jobs are connected just by a dummy successor can produce smaller Symphony job streams (see ops. 30 and 20)
[Diagram: the z/OS plan for applications App1–App5 compared with the resulting Symphony job streams, showing which operations are mirrored]

14 E2E – best practices (1)
Consider increasing the size of the data sets EQQTWSCS, EQQTWSIN, and EQQTWSOU:
Event loss could occur if they are not well sized
Especially if a lot of dynamic additions are performed
Take into consideration the evtsize command to increase the maximum reachable size of the event files (such as Mailbox.msg or Intercom.msg):
This works both for the files in the e2e server work directory and for the TWS agents
The manual "Scheduling End-to-end with Fault Tolerance Capabilities" documents the usage of this command
In case of server maintenance, consider setting the CPUREC parameter CPUAUTOLNK(OFF):
It makes the TWS agent initialization manual
No time is wasted trying to initialize it
A Symphony renew is sufficient to activate it

15 E2E – best practices (2)
The workload throughput of an agent can be globally managed by using the CPUREC CPULIMIT parameter:
This is very useful if a server hosting a TWS agent seems overstressed
By setting it to 0 it is possible to keep the agent active while avoiding job submission
The parameter can be changed:
By submitting a Symphony renew
Dynamically, by using the TWS agent administrative CLI conman (e.g.: "conman lc FTA1; 0")
If different kinds of workload have to run on the same server (e.g. they refer to different LOBs):
Installing more agents is NOT needed (that could stress the server)
Some agents could be "simulated" by using local UNIX extended agents
This is possible only on UNIX servers


