LPAR Capacity Planning Update
Al Sherkow, I/S Management Strategies, Ltd. (414)
Copyright © I/S Management Strategies, Ltd., all rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of the copyright owner. The OS/390 Expo and Performance Conference is granted a non-exclusive license to copy, reproduce or republish this presentation in whole or in part for conference handouts, the conference CD, the conference web site, and other conference activities only. I/S Management Strategies, Ltd. retains the right to distribute copies of this presentation to whomever it chooses. Session P12. Updated presentation available at
Trademarks
These may be used throughout this presentation: Parallel Sysplex, PR/SM, Processor Resource/System Manager, OS/390*, and S/390* are trademarks of IBM Corporation. Other trademarks that may be used are the property of their respective owners. *Registered trademarks.
Goal and Objective
Today's use of your resources: visualization of LPARs, and visualization of Parallel Sysplexes (in my experience this is the most difficult, and it often misrepresents the true use of resources). What will change. Preparing a plan: updating the visualization. Oct Announcements! Dropped some background slides.
Consumption of Resources
Consumers of resources. Growth of business and workload: the same problems and issues we've always had (the magnitude of the requirement and the timing of the requirement). Limits: usually hardware, but could be software or database; batch windows; time for planned outages. What is capacity planning? "Predictions are always tricky, especially about the future." (Yogi Berra)
Important?? (15Feb1999)
Why you may need capacity planning: it reduces the constant need for upgrades, lets you better use idle capacity, and allows better management of hardware.
Important, or Not? (21Feb2000)
Choose an architecture that can scale to at least 20 times what you really think you'll need in six months? When was the last time anyone chewed you out for having too much disk-drive capacity?
Resources Working Together
Processors: number of CPs, memory, channels and links. Coupling Facilities: these also have CPs, memory and links. I/O. Keep them balanced.
First, Is the Performance OK?
What are the desired goals for the workloads? Were the goals met? What response time and throughput were achieved? If a goal was not met, determine why not.
LPAR Overview
Available Time on Physical System
LPAR Definitions
The examples use a 5-way system with 3 partitions; whether the hardware is from IBM, Amdahl or Hitachi does not matter for this discussion.
Workload's Current Use
Current Use of the Physical System
Current Use of the Physical System
Trend an Important Partition
Represent Engines as Columns
All the physical engines support all the logical engines. As the utilization of the physical box approaches 100%, the processing weights are used. LPAR capacity is limited by the number of logical engines.
Make It Easier to Understand
Arrows highlight the logical push point between Part A and Part C.
Average Growth Over Time
Add Percentiles
Add Max to Percentiles
Consider: Averages or Percentiles?
Averages are not representative of your workload. In a week of 8-hour shifts with 15-minute intervals there are 160 samples; 10% of them (16 samples, or 4 hours) are busier than the P90 value. Many would argue that in today's e-world you should use P95, P99 or even the maximum. Percentiles represent the peaks better, but percentiles are very hard to explain to anyone, technical or management.
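The averages-versus-percentiles argument above can be sketched numerically. This is a minimal illustration with invented utilization samples (the data and the nearest-rank percentile helper are assumptions, not from the presentation): 160 fifteen-minute samples, of which 16 sit above the P90 value.

```python
"""Sketch: average vs. percentiles vs. max for CPU-busy samples."""
import random

random.seed(42)
# Synthetic %busy samples: mostly moderate, plus some peak intervals.
samples = ([random.uniform(30, 70) for _ in range(144)]
           + [random.uniform(85, 100) for _ in range(16)])

def percentile(data, p):
    """Nearest-rank percentile: smallest value >= p% of the data."""
    ordered = sorted(data)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

avg = sum(samples) / len(samples)
print(f"average: {avg:.1f}%")
print(f"P90:     {percentile(samples, 90):.1f}%")
print(f"P95:     {percentile(samples, 95):.1f}%")
print(f"max:     {max(samples):.1f}%")
# With 160 samples, the 16 busiest (4 hours) exceed the P90 value,
# which is exactly the exposure an average hides.
```

The average sits well below P90 here, which is the slide's point: planning to the average under-provisions the busiest hours.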
Do Peaks Matter?
Waiting for CPU? The top line is partition busy; the bottom line is PctRdyWt.
Waiting for CPU? The top line is whole-box busy; the spiky line is PctRdyWt; the bottom line is LPAR busy.
Change Across an Upgrade
LPAR Review
Views of available time; one workload; one LPAR, and one LPAR of many; trending; representing logicals on physicals; averages, percentiles and peaks; latent demand (Pct Ready Wait).
Why You Want Parallel Sysplex
Up to 32 OS/390 images managed as one: a single image to applications, price/performance, granularity, scalability, availability.
Sizing Coupling Facilities
Data-sharing CFs should have MIPS equal to about 8% of the total in the sysplex, or 10% of the data-sharing workload. Try for %busy < 50%. Memory: try the CF Structure Sizer on IBM's website. Links: two from each image for redundancy; watch power boundaries and SAPs. Monitor RMF to determine if more are needed.
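The sizing rules of thumb above are simple arithmetic, sketched here as helper functions. The function names are hypothetical, and treating the two MIPS rules as "take the larger estimate" is my assumption about how to combine them, not the presenter's stated procedure.

```python
def cf_mips_needed(total_sysplex_mips, data_sharing_mips=None):
    """Rule of thumb from the slide: CF capacity of about 8% of total
    sysplex MIPS, or 10% of the data-sharing workload's MIPS.
    When both inputs are known, return the larger (more conservative)
    estimate -- an assumption for this sketch."""
    estimates = [0.08 * total_sysplex_mips]
    if data_sharing_mips is not None:
        estimates.append(0.10 * data_sharing_mips)
    return max(estimates)

def cf_busy_ok(cf_busy_pct):
    """The slide suggests keeping CF %busy under 50%."""
    return cf_busy_pct < 50.0

# Example: a 1000-MIPS sysplex where 900 MIPS do data sharing.
print(cf_mips_needed(1000, 900), "MIPS of CF capacity suggested")
```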
Recovery Issues
Avoid single points of failure: two CFs, two CPs in each CF, two Sysplex Timers, multiple links, and couple data sets on separate DASD subsystems. Build failure-independent configurations. ISGLOCK cannot be rebuilt if the left system is lost (ISGLOCK is for GRS Star).
What Does It Cost?
The CPU effect varies based on: the data-sharing workloads (how much of the system accesses shared data); the type of hardware for links, CFs and CPUs; and the number of images (each adds about 1/2%). At the system level: resource sharing costs about 3% more; data sharing costs 15% to 20% in stress testing and 5% to 11% in typical production.
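One possible reading of those cost numbers as a rough estimator. This is a sketch under stated assumptions, not the presenter's formula: I assume the per-image 1/2% adds on top of the system-level base, and I use range midpoints (8% for production data sharing, 17.5% for stress testing).

```python
def sysplex_cpu_overhead_pct(n_images, data_sharing=True, production=True):
    """Rough data-sharing CPU overhead from the slide's figures.
    Assumptions: each image adds ~0.5% on top of the base; resource
    sharing alone is ~3%; production data sharing is 5-11% (midpoint
    8%) and stress testing 15-20% (midpoint 17.5%)."""
    per_image = 0.5 * n_images
    if not data_sharing:
        return 3.0 + per_image          # resource sharing only
    base = 8.0 if production else 17.5  # midpoints of slide ranges
    return base + per_image

# Example: a 4-image production data-sharing sysplex.
print(sysplex_cpu_overhead_pct(4), "% estimated CPU overhead")
```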
What Is Making Decisions?
Two sysplexes, via their 7 WLMs, and three partitioners. Who sets the capacity? The site, through the number of LPs and the weights; the site, through goals. The partitioner does not know your goals. WLM tries to satisfy your goals, but may be limited by the number of LPs.
What Can Push?
3 physical CPs; 2 LPs assigned to TEST1 with a weight of 33%, 3 LPs assigned to PROD1 with a weight of 66%. Can the Parallel Sysplexes move the line, or only the partitioner?
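The weight-versus-LP-count interaction in that example can be worked out numerically. A minimal sketch, assuming the usual PR/SM rule from the earlier slide: at 100% box busy an LPAR's share is its weight fraction of the physical engines, and at any utilization it can never exceed its logical-processor count. The function name is invented for illustration.

```python
def lpar_capacity_engines(n_physical, n_logical, weight, total_weight):
    """Return (guaranteed engines at 100% box busy, max engines).
    The weight determines the share when the box is saturated; the
    number of LPs caps what the LPAR can ever use."""
    weight_share = n_physical * weight / total_weight
    return min(weight_share, n_logical), n_logical

# Slide example: 3 physical CPs; TEST has 2 LPs at weight 33,
# PROD has 3 LPs at weight 66 (total weight 99).
test_guaranteed, test_max = lpar_capacity_engines(3, 2, 33, 99)
prod_guaranteed, prod_max = lpar_capacity_engines(3, 3, 66, 99)
print(f"TEST: ~{test_guaranteed:.2f} engines guaranteed, {test_max} max")
print(f"PROD: ~{prod_guaranteed:.2f} engines guaranteed, {prod_max} max")
```

So TEST is guaranteed about one engine but can push to two when PROD is idle, which is exactly the "who can move the line" question on the slide.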
What Can Push (IRD)?
IRD provides LPAR Clusters: WLM talks to PR/SM (z900, z/OS in z/Architecture mode) and optimizes CPU and channels across LPARs.
IRD-Channels
Dynamic Channel-path Management monitors I/O to LCUs and can add or remove paths to an LCU. It is monitored with I/O Velocity: 100 * (device connect) / (device connect + channel pend time). Managed channels must go to a switch, and managed channels are available to only one LPAR Cluster.
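The I/O Velocity formula above is straightforward to compute; here it is as a small helper (the function name and the zero-traffic convention are mine, but the formula is taken directly from the slide).

```python
def io_velocity(device_connect_ms, channel_pend_ms):
    """I/O Velocity per the slide:
    100 * device connect / (device connect + channel pend time).
    Returns 0.0 when there is no measured time at all (an assumption
    for this sketch; the slide does not cover the idle case)."""
    total = device_connect_ms + channel_pend_ms
    if total == 0:
        return 0.0
    return 100.0 * device_connect_ms / total

# Example: 8 ms connect time, 2 ms pend time -> velocity of 80.
print(io_velocity(8, 2))
```

A velocity near 100 means I/O is almost all productive connect time; a low velocity signals pend time that Dynamic Channel-path Management may relieve by adding paths.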
IRD-Channels
Channel Subsystem Priority Queuing (z900, Basic or LPAR mode). z/OS sets this based on Goal Mode policies, using a different calculation than WLM's I/O priorities; the user sets up to 8 different values. If 2 or more I/O requests are queued in the channel subsystem, the CSS microcode honors priority order.
IRD-CPU Management
Manages the processor weighting and the number of LPs in an LPAR Cluster by goal policies. The sum of the partitions' weights is viewed as a pool, controlled by the site. Value: engines run with less interference because of fewer time slices; reduced overhead from fewer LPs; PR/SM gets to understand the goals. New data: partition min, max and avg weight, time at min and time at max.
IRD-CPU Management
Clustering does not communicate between different Parallel Sysplexes. A single Parallel Sysplex can have LPAR Clusters on multiple CECs, and a single CEC can have multiple LPAR Clusters belonging to separate Parallel Sysplexes.
IRD-CPU Management Controls
WLM CPU management functions can be enabled or disabled on an LPAR basis; minimum and maximum partition weight; "partition weight" is renamed to "initial partition weight".
What Can Push?
What Can Push?
Engine Allocation
Average Utilization/Available
Capacity for Handling Peaks
Trend Similar to LPAR
Software Pricing
z/OS on z900: charges are based on LPAR capacity. New external: Defined Capacity. The rolling 4-hour average is limited by the Defined Capacity; too much demand leads to a soft cap. You can run in exception mode, but records are generated. White space: you can have engines without LPARs, available for spikes and handled through the 4-hour rolling average.
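The soft-cap mechanism above can be sketched as a rolling-average check. This is a simplified illustration, not WLM's actual algorithm: I assume 5-minute MSU samples (48 per 4-hour window), and the function names are invented.

```python
def rolling_4h_avg_msu(msu_samples_5min):
    """Rolling 4-hour average over 5-minute MSU samples (48 samples
    per window) -- the quantity compared against Defined Capacity.
    With fewer than 48 samples, average what is available."""
    window = 48
    recent = msu_samples_5min[-window:]
    return sum(recent) / len(recent)

def soft_capped(msu_samples_5min, defined_capacity):
    """An LPAR becomes soft-capped when its rolling 4-hour average
    MSU consumption reaches its Defined Capacity."""
    return rolling_4h_avg_msu(msu_samples_5min) >= defined_capacity

# Example: a 90-MSU Defined Capacity absorbs a short spike to 120
# as long as the 4-hour average stays below 90.
history = [60.0] * 40 + [120.0] * 8
print(rolling_4h_avg_msu(history), soft_capped(history, 90))
```

This is why white space helps with spikes: brief bursts above Defined Capacity are averaged away over the 4-hour window rather than capped immediately.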
Software Pricing: Why White Space
Example chart: a zSeries CEC of 280 MSUs at 100% busy, with engines of 40 MSUs each, running a CICS workload and a DB2 workload. LPAR1 limit: 6*40 = 240 MSUs. LPAR2 limit: 3*40 = 120 MSUs. The limit of capacity is the number of LPs or the weight.
Software Pricing: White Space
Same 280-MSU zSeries box: LPAR1 defined at 150 MSUs and LPAR2 defined at 75 MSUs, leaving 55 MSUs of white space. Certificates: z/OS 225 MSUs, CICS 225 MSUs, DB2 75 MSUs. The sum of the LPARs must be less than the physical box; white space is not defined, it is left over by your configuration. You must use LPARs.
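The white-space arithmetic from the two example slides, worked through in a short sketch (the dictionary names are mine; the MSU figures are the slides' own).

```python
# White-space arithmetic for the 280-MSU zSeries example:
# 7 engines of ~40 MSUs each; two LPARs with Defined Capacities.
box_msu = 280
defined = {"LPAR1": 150, "LPAR2": 75}          # Defined Capacity
lp_caps = {"LPAR1": 6 * 40, "LPAR2": 3 * 40}   # limit from # of LPs

# White space is simply what the configuration leaves undefined.
white_space = box_msu - sum(defined.values())
print(f"white space: {white_space} MSUs")

# Each LPAR's hard ceiling is the smaller of its LP-count limit and
# the physical box; Defined Capacity only caps the 4-hour average.
for name in defined:
    hard_cap = min(lp_caps[name], box_msu)
    print(f"{name}: defined {defined[name]} MSUs, hard cap {hard_cap} MSUs")
```

Note the gap between each LPAR's Defined Capacity and its LP-count hard cap: that headroom, plus the 55 MSUs of white space, is what absorbs spikes under the rolling 4-hour average.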
Summary
LPAR: capacity is controlled by the number of CPs; flexible to 100% busy; WLMs do not talk to the partitioners. IRD: Capacity on Demand may be the writing on the wall. Parallel Sysplex: WLMs in separate sysplexes do not talk to each other; your sysplexes have goals that must be managed by you; handling peaks is more important than ever!
Al Sherkow (414) Questions?
References:
1. King, Gary. OS/390 Conference Session P15. Oct
2. Kelley, Joan. Many Coupling Facility presentations.
3. IBM's Parallel Sysplex website.
4. IBM eServer zSeries 900 Technical Guide. Oct 2000. SG
5. Workload License Charges & IBM License Manager (Expo Oct 2000).
"Statistics are merely numbers and have no control over actual events." PBS, Savage Planet, Storms of the Century, 06/17/2000