Slide 1: Introduction to HiperDispatch Management Mode with z10
NCACMG meeting, June 11, 2008
©Copyright 2008, Computer Management Sciences, Inc., Hartfield, VA, www.cpexpert.com

Slide 2: Highlights of HiperDispatch
Why HiperDispatch Management Mode?
- Hardware cache reload reducing performance
- Large number of active logical processors causing multiprocessor effect
Redesign of the z/OS dispatcher:
- Multiple affinity dispatch queues
- Dynamic LPAR weight distribution to logical processors
Redesign of PR/SM algorithms:
- Establish affinity nodes of up to 4 processors
- Establish high polarity logical processors

Slide 3: z10 HARDWARE WITHIN A BOOK (diagram)

Slide 4: EXAMPLE – MULTIPLE BOOKS (diagram)

Slide 5: Common PR/SM Definition Problem
The CPC has 12 physical processors.

LPAR    WEIGHT  SHARE  #CP   #LCP  %CP/LCP
LPAR1    400     40%    4.8     5    96%
LPAR2    400     40%    4.8    12    40%
LPAR3    150     15%    1.8     3    60%
LPAR4     50      5%    0.6     2    30%
TOTAL   1000    100%   12.0    22
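The arithmetic behind the slide 5 table is straightforward: an LPAR's share of the CPC is its weight divided by the total weight, its capacity in physical-processor equivalents is that share times the number of physical processors on the CPC, and the best-case busy of each logical processor is that capacity divided by the number of logical processors defined. The Python sketch below simply reproduces that calculation; the LPAR names, weights, and logical-processor counts come from the slide, while the variable names and loop structure are just one illustrative way to write it.

```python
# Sketch of the share arithmetic in the slide 5 table (values from the slide).

PHYSICAL_CPS = 12  # shared physical processors on the CPC

# LPAR name -> (weight, number of logical processors defined)
lpars = {
    "LPAR1": (400, 5),
    "LPAR2": (400, 12),
    "LPAR3": (150, 3),
    "LPAR4": (50, 2),
}

total_weight = sum(weight for weight, _ in lpars.values())

for name, (weight, n_lcp) in lpars.items():
    share = weight / total_weight        # LPAR share of the CPC, e.g. 0.40
    cp_equiv = share * PHYSICAL_CPS      # capacity in physical-processor equivalents
    per_lcp = cp_equiv / n_lcp           # best-case busy per logical processor
    print(f"{name}: SHARE={share:.0%}  #CP={cp_equiv:.1f}  "
          f"#LCP={n_lcp}  %CP/LCP={per_lcp:.0%}")
```

LPAR2 is the interesting row: spreading 4.8 processors' worth of capacity across 12 logical processors leaves each logical processor with at most 40% of a physical processor, which is exactly the situation slide 6 describes.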

Slide 6: Common PR/SM Definition Problem
Consider a simple PR/SM definition for the production LPAR:
- 12 logical processors in one LPAR
- PR/SM gives an equal share to each logical processor
Resulting logical processor busy:
- Perhaps 40% per processor if the LPAR uses only its share
- Causes multiprocessor effect
- Each logical processor has low access to a physical processor, which can cause the "short engine" effect
The solution is counter-intuitive to management!

Slide 7: Brief HiperDispatch Overview – z/OS
(Normal operation – not MASTER or SYSSTC)
- z/OS knows the total weight allocated to the LPAR
- z/OS periodically provides PR/SM with the weight (and corresponding share) of each logical processor:
  - High polarity logical processors = 100% share of an equivalent physical processor (up to 4 logical processors per affinity pool)
  - Medium polarity logical processors = less than 100% share of an equivalent physical processor; the share is divided equally among the logical processors in the affinity pool (up to 2 logical processors)
  - Low polarity logical processors = 0% share of an equivalent physical processor
- Dynamic management and adjustment of the number of logical processors in each category
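As a rough illustration of the weight-splitting categories on slide 7, the Python sketch below divides an LPAR's capacity (expressed in physical-processor equivalents) across its defined logical processors. This is only an approximation under stated assumptions: the real z/OS/PR/SM algorithm applies additional rules (for example, when to use one versus two medium processors, and separate handling of zIIP/zAAP pools), and the function name and rounding choices here are hypothetical.

```python
import math

def split_polarity(lpar_share_cps: float, n_lcps_defined: int):
    """Rough sketch of the slide 7 categories: split an LPAR's capacity
    (in physical-processor equivalents) across its defined logical processors.
    Illustration only -- not the actual z/OS / PR/SM algorithm."""
    highs = math.floor(lpar_share_cps)    # each high gets 100% of a physical processor
    remainder = lpar_share_cps - highs    # always less than 1.0 CP
    # Slide 7 allows up to 2 medium processors; for simplicity this sketch uses one.
    mediums = 1 if remainder > 1e-9 else 0
    medium_share = remainder / mediums if mediums else 0.0
    lows = max(n_lcps_defined - highs - mediums, 0)   # 0% share: candidates for parking
    return highs, mediums, medium_share, lows

# Example: LPAR2 from slide 5 -- 4.8 CPs of capacity, 12 logical processors defined.
high, med, med_share, parked = split_polarity(4.8, 12)
print(f"{high} high, {med} medium at {med_share:.0%}, {parked} parked")
# -> 4 high, 1 medium at 80%, 7 parked
```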

Slide 8: z/OS AFFINITY DISPATCH QUEUES (diagram)

Slide 9: Brief HiperDispatch Overview – PR/SM
PR/SM creates affinity nodes of up to four physical processors, and dispatches the logical processors to physical processors in the affinity node.
- High polarity logical processors (100% share of a physical processor)
  - Dispatched to the physical processor affinity node
  - Pseudo-dedicated processors
  - PR/SM maintains affinity between logical and physical
- Medium polarity
  - Total share less than 1.5 equivalent physical processors
  - Dispatched to an affinity group of physical processors
  - Logical processors have equal share
- Low polarity
  - 0% share
  - PR/SM "parks" these logical processors

Slide 10: PR/SM AFFINITY NODE – HIGH POLARITY PROCESSORS (diagram)

Slide 11: HiperDispatch Benefits
- Optimizes reuse of hardware cache
  - L1 and L1.5 cache reused by dispatch to the same processor
  - L2 cache reused by dispatch to the same book
- Minimizes the number of logical processors
  - Reduces multiprocessor overhead
  - Minimizes the "short engine" effect
- IBM estimates up to 10% improvement in ITR, depending on:
  - The number of physical processors
  - The number of logical processors assigned to the LPAR
  - The logical/physical processor ratio
  - The size of the locality of memory reference (working set)

Slide 12: Performance Considerations
- Serialization of resource use by applications
  - An application might wait longer for access to a processor
  - Performance goals might need to be adjusted
- Repeatability of application execution time
- Impact of HiperDispatch on LPARs not in HiperDispatch Management Mode
  - Available capacity could be reduced because of high polarity processors dedicated by HiperDispatch
  - Capacity could be skewed across logical processors because HiperDispatch minimizes logical processors

Slide 13: Implementation Considerations
- Not for LPARs with a low share of CPC capacity (see the eligibility sketch below)
  - Minimum of 0.5 equivalent physical processor required
  - 1.5 equivalent physical processors required for a high polarity processor
- Not for LPARs that have 2 or 3 logical processors
- Not much benefit for small z10 environments
- Review IRD specifications
- Review zIIP and zAAP specifications
- Review performance goals and goal importance
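The thresholds in the first two bullets lend themselves to a quick screening check. The sketch below encodes them directly; the function name, its inputs, and the wording of the findings are hypothetical, while the 0.5 / 1.5 processor thresholds and the 2-or-3 logical processor caveat come from this slide.

```python
def hiperdispatch_checklist(share_cps: float, n_lcps: int) -> list:
    """Hypothetical screening helper based on the slide 13 considerations.
    share_cps = LPAR capacity share in equivalent physical processors."""
    notes = []
    if share_cps < 0.5:
        notes.append("Share below 0.5 CP: HiperDispatch not appropriate")
    elif share_cps < 1.5:
        notes.append("Share below 1.5 CP: no high polarity processor possible")
    if n_lcps in (2, 3):
        notes.append("Only 2 or 3 logical processors: little benefit expected")
    return notes

print(hiperdispatch_checklist(share_cps=0.6, n_lcps=2))  # both cautions apply
```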

Slide 14: RMF Variables for HiperDispatch
SMF70PFL – bit meaning when set:
  0  Content of SMF70UPI valid
  1  Group flag
  2  Polarization flag. This partition is vertically polarized; that is, HiperDispatch mode is active. The SMF70POW fields in the logical processor data section are valid for logical processors of this partition.
SMF70HHF – bit meaning when set:
  0  HiperDispatch supported
  1  HiperDispatch is active
  2  HiperDispatch status changed during interval
SMF70POW – Weight for the logical processor when HiperDispatch mode is active. The value may be the same or different for all shared logical processors of the type described by this PR/SM record.
SMF70PAT – Logical processor parked time.
These variables require APAR OA24074 and APAR OA12774.
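As an illustration of how these flag bytes might be tested once they have been extracted from an SMF type 70 record, here is a small Python sketch. It assumes the flag fields are available as one-byte integers and that bit 0 is the high-order (0x80) bit, the usual convention in SMF documentation; the function and variable names are hypothetical, and reading the raw record is not shown.

```python
def bit(flag_byte: int, bit_number: int) -> bool:
    """True if the given bit (0 = high-order bit) is set in a one-byte flag."""
    return bool(flag_byte & (0x80 >> bit_number))

def describe_partition(smf70pfl: int, smf70hhf: int) -> dict:
    """Decode the slide 14 flag bits (bit positions per the slide)."""
    return {
        "smf70upi_valid":       bit(smf70pfl, 0),
        "group_flag":           bit(smf70pfl, 1),
        "vertically_polarized": bit(smf70pfl, 2),  # HiperDispatch mode active
        "hd_supported":         bit(smf70hhf, 0),
        "hd_active":            bit(smf70hhf, 1),
        "hd_status_changed":    bit(smf70hhf, 2),
    }

# Example: polarization flag on; HiperDispatch supported and active.
print(describe_partition(smf70pfl=0x20, smf70hhf=0xC0))
```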

Slide 15: CPExpert Initial Analysis
- CPU access might be denied because of HiperDispatch
  - Service class period missed its goal
  - HiperDispatch minimizes logical processors – fewer "servers"
  - Serialization of resource queues
- HiperDispatch could not be activated for an LPAR with low weight
  - LPAR weight less than 0.5 physical processor
- HiperDispatch could not establish a high polarity processor
  - LPAR weight less than 1.5 physical processor

Slide 16: CPExpert Initial Analysis (continued)
- HiperDispatch might not be appropriate for the LPAR
  - Small number of logical processors assigned
  - Most logical processors were parked
- Potentially inadequate logical processors assigned to the LPAR
  - 90% of the LPAR capacity share used for the RMF interval
  - No logical processors parked
  - Excess capacity on the CPC (number of logical processors less than the shared processors on the CPC)
- Potentially inadequate zAAP processors assigned to the LPAR
- Potentially inadequate zIIP processors assigned to the LPAR
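To make the second finding concrete, here is a hedged sketch of that rule as described on the slide. The 90% threshold and the three conditions come from the slide; the function name, its inputs, and the assumption that all three conditions must hold together are illustrative, since the exact logic CPExpert uses is not shown here.

```python
def maybe_needs_more_lcps(pct_capacity_used: float,
                          parked_lcps: int,
                          lcps_defined: int,
                          shared_physical_cps: int) -> bool:
    """Sketch of the slide 16 finding 'potentially inadequate logical
    processors assigned to the LPAR' (thresholds per the slide)."""
    return (pct_capacity_used >= 0.90          # >= 90% of LPAR capacity share used
            and parked_lcps == 0               # no logical processors parked
            and lcps_defined < shared_physical_cps)  # excess capacity on the CPC

print(maybe_needs_more_lcps(0.93, 0, 4, 10))   # -> True
```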

Slide 17: CPExpert Initial Analysis (continued)
- HiperDispatch was turned off because IRD decreased the weight
  - New weight yields less than a 0.5 physical processor share
  - Increase the minimum weight for IRD
- High polarity processor not established because IRD decreased the weight
  - New weight yields less than a 1.5 physical processor share
  - Increase the minimum weight for IRD
- Information findings
  - HiperDispatch was turned back on; IRD increased the weight
  - High polarity processor established; IRD increased the weight
  - HiperDispatch was turned on/off by operator action
** = Applies to LPARs not in HiperDispatch Management Mode

Slide 18: CPExpert Initial Analysis (LPARs not in HiperDispatch Management Mode)
- LPAR could not use the defined logical processors
  - HiperDispatch used pseudo-dedicated physical processors
  - Number of "free" physical processors is less than the logical processors assigned to the LPAR
  - The LPAR's available capacity is diminished significantly
- LPAR could not use the defined logical zAAP processors
- LPAR could not use the defined logical zIIP processors
- Logical processors in the LPAR had a skewed share of capacity
  - HiperDispatch uses a large percentage of the physical processors
  - The LPAR cannot use "equal share" access to physical processors

Slide 19: References
- System z9 109 PR/SM Planning Guide (SB ), Chapter 3: Determining the Characteristics of Logical Partitions
- System z10 PR/SM Planning Guide (SB ), Chapter 3: Determining the Characteristics of Logical Partitions
- z/OS Intelligent Resource Director (SG ), Chapter 3: How WLM LPAR CPU Management Works, Section 3.5: LPAR Weights
- IBM System z10 Enterprise Class Technical Introduction (SG )
- IBM System z10 Enterprise Class Technical Guide (SG )
- "WLM – Update for z/OS Release 9 and IBM System z10, The Latest and Greatest" (Horst Sinram, WLM Development, IBM Corporation, Boeblingen, Germany), Session 2540, SHARE Orlando 2008
- "What's New with z/OS Release 10" (John Eells, IBM Poughkeepsie), Session 2838, SHARE Orlando 2008
- WP "z/OS: Planning Considerations for HiperDispatch Mode" (Steve Grabarits, IBM System z Performance)

Slide 20: For More Information
Don Deese
Computer Management Sciences, Inc.
634 Lakeview Drive
Hartfield, VA
Phone: (804)
Fax: (804)
Visit www.cpexpert.com for a draft of "Introduction to HiperDispatch Management Mode with z10".