Selecting Evaluation Techniques
Andy Wang
CIS 5930 Computer Systems Performance Analysis

Decisions to be Made
– Evaluation technique
– Performance metrics
– Performance requirements

Evaluation Techniques
Experimentation isn’t always the answer. Alternatives:
– Analytic modeling (queueing theory)
– Simulation
– Experimental measurement
But always verify your conclusions!

Analytic Modeling
– Cheap and quick
– Don’t need working system
– Usually must simplify and make assumptions
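
As an illustration of what a queueing model buys you (a textbook M/M/1 result, not from the original slide): with Poisson arrivals at rate λ and exponential service at rate μ, the mean response time has a closed form, so no working system is needed:

```latex
R = \frac{1}{\mu - \lambda}, \qquad \lambda < \mu
```

The formula also makes the simplifying assumptions explicit: a single server, memoryless arrivals, memoryless service.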

Simulation
– Arbitrary level of detail
– Intermediate in cost, effort, accuracy
– Can get bogged down in model-building
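
For contrast with the analytic model above, here is a minimal discrete-event sketch of the same M/M/1 queue (FCFS, single server); the simulated mean response time should approach 1/(μ − λ):

```python
import random

def simulate_mm1(lam, mu, n=100_000, seed=1):
    """Simulate n jobs through an M/M/1 FCFS queue; return mean response time."""
    rng = random.Random(seed)
    arrival = 0.0
    server_free = 0.0       # time at which the server next becomes idle
    total_response = 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)            # Poisson arrivals
        start = max(arrival, server_free)          # wait if server is busy
        server_free = start + rng.expovariate(mu)  # exponential service
        total_response += server_free - arrival    # departure - arrival
    return total_response / n

print(simulate_mm1(lam=0.8, mu=1.0))  # ~5.0, matching 1/(1.0 - 0.8)
```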

Measurement
– Expensive
– Time-consuming
– Difficult to get detail
– But accurate

Selecting Performance Metrics
Three major performance metrics:
– Time (responsiveness)
– Processing rate (productivity)
– Resource consumption (utilization)
Error (reliability) metrics:
– Availability (% time up)
– Mean Time to Failure (MTTF/MTBF): same as mean uptime
Also: cost/performance

Response Time
How quickly does system produce results?
Critical for applications such as:
– Time sharing/interactive systems
– Real-time systems
– Parallel computing

Examples of Response Time
– Time from keystroke to echo on screen
– End-to-end packet delay in networks
  – How about dropped packets?
– OS bootstrap time
– Leaving Love to getting food in Oglesby
  – Edibility is not a factor

Measures of Response Time
– Response time: request-response interval
  – Measured from end of request
  – Ambiguous: beginning or end of response?
– Reaction time: end of request to start of processing
– Turnaround time: start of request to end of response
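
The three measures differ only in which two timestamps you subtract. A minimal sketch, with stub helpers standing in for real transport and server hooks (all three are hypothetical):

```python
import time

def send_request(request):        # stub: stand-in for real request transmission
    time.sleep(0.001)

def wait_for_processing_start():  # stub: hypothetical server "began work" signal
    time.sleep(0.002)

def receive_response():           # stub: blocks until the last byte arrives
    time.sleep(0.010)

def timed_request(request):
    t_req_start = time.monotonic()          # start of request
    send_request(request)
    t_req_end = time.monotonic()            # end of request
    wait_for_processing_start()
    t_proc_start = time.monotonic()         # start of processing
    receive_response()
    t_resp_end = time.monotonic()           # end of response

    reaction   = t_proc_start - t_req_end   # end of request -> start of processing
    response   = t_resp_end - t_req_end     # end of request -> end of response
    turnaround = t_resp_end - t_req_start   # start of request -> end of response
    return reaction, response, turnaround

print(timed_request("GET /"))
```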

The Stretch Factor
– Response time usually goes up with load
– The stretch factor measures this
[Figure: response time vs. load, contrasting a low-stretch curve with a high-stretch curve]
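
The formula on the original slide is lost in this transcript; the usual definition (from Jain's Chapter 3, which these slides follow) is the ratio of the response time at a given load to the response time at minimal load:

```latex
\text{Stretch factor} = \frac{R(\text{load})}{R(\text{minimal load})}
```

A stretch factor near 1 across the operating range means the system degrades gracefully under load.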

Processing Rate
How much work is done per unit time?
Important for:
– Sizing multi-user systems
– Comparing alternative configurations
– Multimedia

Examples of Processing Rate
– Bank transactions per hour
– File-transfer bandwidth
– Aircraft control updates per second
– Jurassic Park customers per day

Power Consumption Metric
– Watts (power) = joules (energy) / second
– Joules (energy) = execution time × watts consumed at a given point
– Insufficient to just reduce peak power by 2x at the cost of increasing the computational time by > 2x
– A device may be powered in spikes
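
Why the 2x-power-for-2x-time trade fails, as a one-line worked example using E = P · t: halving power but stretching execution time 2.5x spends more total energy, not less:

```latex
E' = \frac{P}{2} \times 2.5\,t = 1.25\,P t > P t = E
```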

Network Signal Metric
– dBm = 10 × log10(1000 × watts), i.e., decibels relative to 1 milliwatt
– 1 watt = 30 dBm
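
A small sketch of the conversion in both directions; the printed values check the slide's 1 W = 30 dBm example:

```python
import math

def watts_to_dbm(watts: float) -> float:
    """dBm = 10 * log10(power in milliwatts)."""
    return 10 * math.log10(watts * 1000)

def dbm_to_watts(dbm: float) -> float:
    """Inverse conversion back to watts."""
    return 10 ** (dbm / 10) / 1000

print(watts_to_dbm(1.0))    # 30.0  (1 W = 30 dBm)
print(watts_to_dbm(0.001))  # 0.0   (1 mW = 0 dBm)
print(dbm_to_watts(30.0))   # 1.0
```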

Measures of Processing Rate
– Throughput: requests per unit time (MIPS, MFLOPS, MB/s, TPS)
– Nominal capacity: theoretical maximum (e.g., bandwidth)
– Knee capacity: where things go bad
– Usable capacity: where response time hits a specified limit
– Efficiency: ratio of usable to nominal capacity

Nominal, Knee, and Usable Capacities
[Figure: throughput vs. load, marking the response-time limit and the knee, usable, and nominal capacities]

Resource Consumption
How much does the work cost?
Used in:
– Capacity planning
– Identifying bottlenecks
Also helps to identify the “next” bottleneck

Examples of Resource Consumption
– CPU non-idle time
– Memory usage
– Fraction of network bandwidth needed
– Square feet of beach occupied

Measures of Resource Consumption
– Utilization: U = (1/T) ∫ u(t) dt over [0, T], where u(t) is instantaneous resource usage
  – Useful for memory, disk, etc.
  – May be tricky: busy disk != doing useful work, due to mechanical overheads
– If u(t) is always either 1 or 0, reduces to busy time or its inverse, idle time
  – Useful for network, CPU, etc.
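
A discrete sketch of the same integral; with 0/1 samples it collapses to busy time over total time, as the slide notes:

```python
def utilization(samples, dt):
    """Approximate U = (1/T) * integral of u(t) dt from usage sampled every dt seconds."""
    total_time = len(samples) * dt
    area = sum(u * dt for u in samples)  # rectangle rule
    return area / total_time

cpu_busy = [1, 1, 0, 1, 0, 0, 1, 1]    # hypothetical 0/1 busy samples
print(utilization(cpu_busy, dt=0.01))  # 0.625 = busy time / total time
```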

Error Metrics
– Successful service (speed)
  – (Not usually reported as error)
– Incorrect service (reliability)
– No service (availability)

Examples of Error Metrics
– Time to get answer from Google
– Dropped Internet packets
– ATM down time
– Wrong answers from IRS

Measures of Errors
– Reliability: P(error) or Mean Time Between Errors (MTBE)
– Availability:
  – Downtime: time when system is unavailable, may be measured as Mean Time to Repair (MTTR)
  – Uptime: inverse of downtime, often given as Mean Time Between Failures (MTBF/MTTF)
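
For reference (a standard formula, not on the original slide), these two means combine into steady-state availability:

```latex
\text{Availability} = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}}
```

For example, MTTF = 999 hours and MTTR = 1 hour gives 999/1000 = 0.999, i.e., "three nines."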

Financial Measures
– When buying or specifying, the cost/performance ratio is often useful
– The performance metric chosen should be the one most important for the application

Characterizing Metrics
– Usually necessary to summarize
– Sometimes means are enough
– Variability is usually critical
  – A mean I-10 freeway speed of 55 MPH doesn’t help plan rush-hour trips

Types of Metrics
– Global: across all users
– Individual: per user
The first helps financial decisions; the second measures satisfaction and the cost of adding users

Choosing What to Measure
Pick metrics based on:
– Completeness
– (Non-)redundancy
– Variability

Completeness
– Must cover everything relevant to problem
  – Don’t want awkward questions from boss or at conferences!
– Difficult to guess everything a priori
  – Often have to add things later

Redundancy
– Some factors are functions of others
– Measurements are expensive
– Look for a minimal set
– Again, often an iterative process

Variability
– Large variance in a measurement makes decisions impossible
– Repeated experiments can reduce variance
  – Expensive
  – Can only reduce it by a certain amount
– Better to choose low-variance measures to start with

Classes of Metrics: HB
Higher is Better (e.g., throughput)
[Figure: utility vs. throughput; utility rises as the metric grows]

Classes of Metrics: LB
Lower is Better (e.g., response time)
[Figure: utility vs. response time; utility falls as the metric grows]

Classes of Metrics: NB
Nominal is Best (e.g., free disk space)
[Figure: utility vs. free disk space; utility peaks at an intermediate “best” value]

Setting Performance Requirements
Good requirements must be SMART:
– Specific
– Measurable
– Acceptable
– Realizable
– Thorough

Example: Web Server
– Users care about response time (end of response)
– Network capacity is expensive, so we want high utilization
– Pages delivered per day matters to advertisers
– Also care about error rate (failed & dropped connections)

Example: Requirements for Web Server
– 2 seconds from request to first byte, 5 to last
– Handle 25 simultaneous connections, delivering 100 KB/s to each
– 60% mean utilization, with 95% or higher less than 5% of the time
– <1% of connection attempts rejected or dropped
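
A quick sanity check of these numbers (my arithmetic, not part of the original slides), which foreshadows the T3 remark on the next slide:

```latex
25 \times 100\ \text{KB/s} = 2.5\ \text{MB/s} = 20\ \text{Mbit/s},
\qquad 20\ \text{Mbit/s} / 0.60 \approx 33\ \text{Mbit/s}
```

Serving 20 Mbit/s of payload at 60% mean utilization implies a link of roughly 33 Mbit/s, far beyond a T1 (1.5 Mbit/s) but within a T3 (about 45 Mbit/s).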

Is the Web Server SMART?
– Specific: yes
– Measurable: may have trouble with rejected connections
– Acceptable: response time and aggregate bandwidth might not be enough
– Realizable: requires a T3 link; utilization depends on popularity
– Thorough? You decide

Other Web Server Metrics
– Advertisers might want to count pages delivered daily
– Error metrics are a big issue:
  – Availability
  – Mean Time to Repair

Remaining Web Server Issues
– Redundancy: response time is closely related to bandwidth and utilization
– Variability: all measures could vary widely
  – Should we specify variability limits for metrics other than utilization?
