Performance Evaluation of Computer Systems and Networks: Introduction, Outline, Class Policy. Instructor: A. Ghasemi. Many thanks to Dr. Behzad Akbari.


1 In the Name of the Most High. Performance Evaluation of Computer Systems and Networks: Introduction, Outline, and Class Policy. Instructor: A. Ghasemi. Many thanks to Dr. Behzad Akbari for providing his slides for this lecture.

2 Outline Introduction to performance evaluation Objectives of performance evaluation Techniques of performance evaluation Metrics in performance evaluation

3 Introduction Computer system users, administrators, and designers are all interested in performance evaluation. The goal of system performance evaluation is to provide the highest performance at the lowest cost. Computer performance evaluation plays an important role in the selection of computer systems, the design of systems and applications, and the analysis of existing systems.

4 Objectives of Performance Study Evaluating design alternatives (system design) Comparing two or more systems (system selection) Determining the optimal value of a parameter (system tuning) Finding the performance bottleneck (bottleneck identification) Characterizing the load on the system (workload characterization) Determining the number and sizes of components (capacity planning) Predicting the performance at future loads (forecasting).

5 Basic Terms System: any collection of hardware, software, and network components. Metrics: the criteria used to analyze the performance of the system or its components. Workloads: the requests made by the users of the system.

6 Performance Evaluation Activities Performance evaluation of a system can be done at different stages of system development. System in the planning and design stage:  Use high-level models to obtain performance estimates for alternative system configurations and alternative designs. System is operational:  Measure the system behavior with a view to improving performance.  Develop a validated model that can be used for performance prediction and capacity planning.

7 Techniques for Performance Evaluation Performance measurement  Obtain measurement data by observing the events and activities on an existing system Performance modeling  Represent the system by a model and manipulate the model to obtain information about system performance

8 Performance Measurement Measures the performance directly on a system. Need to characterize the workload placed on the system during measurement. Generally provides the most valid results. Nevertheless, it is not very flexible:  it may be difficult (or even impossible) to vary some workload parameters.

9 Performance Modeling Model:  an abstraction of the system obtained by making a set of assumptions about how the system works  captures the essential characteristics of the system Reasons for using models:  experimenting with the real system may be too costly, too risky, or too disruptive to system operation  the system may only be in the design stage

10 Performance Modeling Workload characterization  Capture the resource demands and intensity of the load brought to the system Performance metrics  The measure of interest, such as mean response time, the number of transactions completed per second, the ratio of blocked connection requests, etc.

11 Performance Modeling Solution methods  Analytic modeling  Simulation modeling

12 Analytic Modeling Mathematical methods are used to obtain solutions to the performance measures of interest Numerical results are easy to compute if a simple analytic solution is available Useful approach when one only needs rough estimates of performance measures Solutions to complex models may be difficult to obtain
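As a concrete illustration of analytic modeling, the closed-form results of an M/M/1 queue can be evaluated directly. The M/M/1 model and the rates below are assumptions for illustration; the slide does not prescribe a particular model.

```python
# Analytic solution of an M/M/1 queue (illustrative model choice).
# lam = arrival rate, mu = service rate, both in jobs per second.

def mm1_metrics(lam: float, mu: float) -> dict:
    """Closed-form performance measures of an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                  # server utilization
    n = rho / (1 - rho)             # mean number of jobs in the system
    r = 1 / (mu - lam)              # mean response time (Little's law: n = lam * r)
    return {"utilization": rho, "mean_jobs": n, "mean_response_time": r}

print(mm1_metrics(8.0, 10.0))
# → utilization 0.8, mean_jobs ≈ 4.0, mean_response_time ≈ 0.5
```

This is exactly the "easy to compute" case the slide mentions: three arithmetic operations give the measures of interest.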

13 Simulation Modeling Develop a simulation program that implements the model. Run the simulation program and use the data collected to estimate the performance measures of interest. A system can be studied at an arbitrary level of detail. It may be costly to develop and run the simulation program.
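A minimal simulation sketch of a single-server FIFO queue with exponential interarrival and service times, using the Lindley recursion; the rates and seed are made-up illustrative values, not anything prescribed by the slide.

```python
import random

# Simulation sketch: estimate the mean response time of a single-server
# FIFO queue with Poisson arrivals (rate lam) and exponential service
# (rate mu). All parameter values here are illustrative.

def simulate_queue(lam: float, mu: float, n_jobs: int, seed: int = 1) -> float:
    rng = random.Random(seed)
    wait = 0.0            # queueing delay of the current job (Lindley recursion)
    total_resp = 0.0
    for _ in range(n_jobs):
        service = rng.expovariate(mu)
        total_resp += wait + service
        interarrival = rng.expovariate(lam)
        # the next job waits for the leftover work minus the arrival gap
        wait = max(0.0, wait + service - interarrival)
    return total_resp / n_jobs    # estimated mean response time

est = simulate_queue(lam=8.0, mu=10.0, n_jobs=200_000)
print(f"simulated mean response time ≈ {est:.3f}")
```

With these rates the analytic M/M/1 answer is 0.5 s, so the simulation estimate can be checked against it, in the spirit of the validation rules later in this lecture.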

14 Stochastic Model The model contains some random input components that are characterized by probability distributions, e.g., the time between arrivals to a system modeled by an exponential distribution. The output is also random, and provides probability distributions of the performance measures of interest.
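A quick sketch of the stochastic-input idea from this slide: sample exponentially distributed interarrival times (the rate is an arbitrary illustrative value) and check that the sample mean approaches the theoretical mean 1/λ.

```python
import random

# Random input component: exponentially distributed interarrival times.
# lam (arrivals per second) is an arbitrary illustrative rate.

lam = 2.0
rng = random.Random(42)
samples = [rng.expovariate(lam) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"sample mean {mean:.4f} vs theoretical 1/lam = {1/lam:.4f}")
```

The randomness of the input is what makes the model's output random as well: re-running with a different seed gives a slightly different estimate, which is why simulation outputs are treated as statistical estimates.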

15 Queuing Model The most commonly used model to analyze the performance of computer systems and networks. Single queue: models a component of the overall system, such as a CPU, disk, or communication channel. Network of queues: models system components and their interaction.

16 Steps in Performance Modeling

17 Commonly Used Performance Metrics Response Time:  Turnaround time  Reaction time  Stretch factor Throughput (operations/second):  Jobs per second  Requests per second  Millions of Instructions Per Second (MIPS)  Millions of Floating-Point Operations Per Second (MFLOPS)  Packets Per Second (PPS)  Bits per second (bps)  Transactions Per Second (TPS) Efficiency Utilization

18 Commonly Used Performance Metrics (Cont…) Reliability:  R(t)  Mean Time To Failure (MTTF) Availability:  steady-state availability = MTTF / (MTTF + MTTR), where MTTR is the Mean Time To Repair
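The availability formula on this slide reduces to one division; the MTTF and MTTR figures below are made-up for illustration.

```python
# Steady-state availability, A = MTTF / (MTTF + MTTR).
# The hour figures are hypothetical.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

a = availability(mttf_hours=999.0, mttr_hours=1.0)
print(f"availability = {a:.3f}  ({a * 100:.1f}%)")   # 0.999, i.e. "three nines"
```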

19 Response Time The interval between a user’s request and the system’s response. (Figure: timeline from user’s request to system’s response.)

20 Response Time (cont…) There can be two measures of response time: response time 1, from the end of the user’s request to the start of the system’s response, and response time 2, from the end of the user’s request to the end of the system’s response. Both are acceptable, but measure 2 is preferred if execution is long. (Figure: timeline marking user starts/finishes request, system starts execution, system starts/finishes response, reaction time, and the two response times.)

21 Response Time (cont…) Turnaround time: the time between submission of a job and completion of its output  For batch job systems Reaction time: the time between submission of a request and the beginning of its execution  Usually must be measured inside the system, since nothing is externally visible Stretch factor: the ratio of the response time at a given load to the response time at minimal load  Most systems have higher response time as load increases
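The stretch factor can be made concrete with an M/M/1 queue; the model choice and the rates are assumptions for illustration, not something the slide prescribes.

```python
# Stretch factor = response time under load / response time at minimal load.
# For an M/M/1 queue (illustrative assumption) the minimal-load response
# time is the bare service time 1/mu, so the stretch factor is 1/(1 - rho).

def stretch_factor_mm1(lam: float, mu: float) -> float:
    loaded = 1 / (mu - lam)      # mean response time at arrival rate lam
    unloaded = 1 / mu            # service time alone, no queueing
    return loaded / unloaded     # equals 1 / (1 - lam/mu)

print(stretch_factor_mm1(lam=8.0, mu=10.0))   # 5.0: jobs take 5x longer at 80% load
```

This matches the slide's observation that response time grows with load: as ρ approaches 1 the stretch factor grows without bound.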

22 Throughput The rate at which requests can be serviced by the system (requests per unit time).

23 Efficiency Ratio of maximum achievable throughput (e.g., 9.8 Mbps) to nominal capacity (e.g., 10 Mbps): 98%. For multiprocessor systems, the ratio of the performance of an n-processor system to n times that of a one-processor system (in MIPS or MFLOPS). (Figure: efficiency vs. number of processors.)
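Both notions of efficiency on this slide reduce to simple ratios. The 9.8/10 Mbps figures are the slide's own example; the MIPS numbers are hypothetical.

```python
# Two efficiency ratios: achievable throughput over nominal capacity,
# and n-processor performance relative to n single processors.
# The MIPS values are made-up illustrative numbers.

def link_efficiency(achievable_mbps: float, nominal_mbps: float) -> float:
    return achievable_mbps / nominal_mbps

def parallel_efficiency(mips_n: float, mips_1: float, n: int) -> float:
    return mips_n / (n * mips_1)

print(f"{link_efficiency(9.8, 10.0):.0%}")                        # 98% (slide's example)
print(f"{parallel_efficiency(mips_n=720, mips_1=100, n=8):.0%}")  # 90% (hypothetical)
```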

24 Utilization Typically, the fraction of time a resource is busy serving requests; time not being used is idle time. System managers often want to balance resources so they have the same utilization, e.g., equal load on all CPUs, but this may not be possible, e.g., for the CPU when I/O is the bottleneck. Utilization may not be time-based:  Processors: busy time / total time  Memory: fraction of memory used / total memory

25 Miscellaneous Metrics Reliability:  probability of errors, or mean time between errors (error-free seconds) Availability:  fraction of time the system is available to service requests (the fraction not available is downtime)  Mean Time To Failure (MTTF) is the mean uptime; it is a useful metric because a system with high availability (small total downtime) may still fail frequently, which is no good for long requests

26 Definition of Reliability Recommendation E.800 of the International Telecommunication Union (ITU-T) defines reliability as follows: “The ability of an item to perform a required function under given conditions for a given time interval.” In this definition, an item may be a circuit board, a component on a circuit board, a module consisting of several circuit boards, a base transceiver station with several modules, a fiber-optic transport system, or a mobile switching center (MSC) and all its subtending network elements. The definition includes systems with software.

27 Basic Definitions of Reliability Let X be the time to failure of a system, F(t) the distribution function of the system lifetime, and f(t) its density function. Reliability: R(t) = P(X > t) = 1 − F(t). Mean Time To system Failure: MTTF = E[X] = ∫₀^∞ t f(t) dt = ∫₀^∞ R(t) dt.
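A numeric sanity check of these definitions, assuming an exponentially distributed lifetime with an arbitrary illustrative rate λ: then R(t) = e^(−λt) and the MTTF integral should come out to 1/λ.

```python
import math

# Check MTTF = integral of R(t) dt for an exponential lifetime
# (rate lam chosen arbitrarily): R(t) = 1 - F(t) = exp(-lam * t),
# so the integral should equal 1/lam.

lam = 0.5                                 # failures per hour (assumed)
def R(t: float) -> float:
    return math.exp(-lam * t)             # reliability function

# trapezoidal integration of R(t) from 0 out to a large horizon
dt, horizon = 0.001, 60.0
steps = int(horizon / dt)
mttf = sum((R(i * dt) + R((i + 1) * dt)) / 2 * dt for i in range(steps))
print(f"numeric MTTF = {mttf:.3f}, analytic 1/lam = {1 / lam:.3f}")
```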

28 Definition of Availability Availability is closely related to reliability and is also defined in ITU-T Recommendation E.800 as follows: "The ability of an item to be in a state to perform a required function at a given instant of time or at any instant of time within a given time interval, assuming that the external resources, if required, are provided." An important difference is that reliability refers to failure-free operation during an interval, while availability refers to failure-free operation at a given instant of time, usually the time when a device or system is first accessed to provide a required function or service.

29 Three Rules of Validation Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements. Do not trust the results of an analytical model until they have been validated by a simulation model or measurements. Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.

30 Class Policy Text book: Probability and Statistics with Reliability, Queuing and Computer Science Applications, 2nd edition, by K. S. Trivedi, John Wiley & Sons. Grading policy: homework (including simulation) 25%, term project and presentation 25%, final exam 50%.