Presentation transcript:

3-1 ©2013 Raj Jain, Washington University in St. Louis. Selection of Techniques and Metrics. Raj Jain, Washington University in Saint Louis, Saint Louis, MO. These slides are available on-line at:

3-2 Overview: Criteria for Selecting an Evaluation Technique; Three Rules of Validation; Selecting Performance Metrics; Commonly Used Performance Metrics; Utility Classification of Metrics; Setting Performance Requirements.

3-3 Criteria for Selecting an Evaluation Technique

3-4 Three Rules of Validation. Do not trust the results of an analytical model until they have been validated by a simulation model or measurements. Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements. Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.

3-5 Selecting Performance Metrics

3-6 Selecting Metrics. Include: performance (time, rate, resource); error rate, probability; time to failure and duration. Consider including: mean and variance; individual and global. Selection criteria: low variability, non-redundancy, completeness.

3-7 Case Study: Two Congestion Control Algorithms. Service: Send packets from specified source to specified destination in order. Possible outcomes: Some packets are delivered in order to the correct destination. Some packets are delivered out-of-order to the destination. Some packets are delivered more than once (duplicates). Some packets are dropped on the way (lost packets).

3-8 Case Study (Cont). Performance for packets delivered in order (time-rate-resource): response time to deliver the packets; throughput: the number of packets per unit of time; processor time per packet on the source end system; processor time per packet on the destination end system; processor time per packet on the intermediate systems; variability of the response time, which leads to retransmissions; response time: the delay inside the network.

3-9 Case Study (Cont). Out-of-order packets consume buffers: probability of out-of-order arrivals. Duplicate packets consume network resources: probability of duplicate packets. Lost packets require retransmission: probability of lost packets. Too much loss causes disconnection: probability of disconnect.

3-10 Case Study (Cont). Sharing a resource raises the question of fairness: use a fairness index. Properties: it always lies between 0 and 1; equal throughput for all users gives fairness = 1; if k of n users receive throughput x and the remaining n-k users receive zero throughput, the fairness index is k/n.
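The fairness index described on this slide is Jain's fairness index, (sum of x_i)^2 / (n * sum of x_i^2); a minimal sketch of computing it (the function name is illustrative):

```python
def fairness_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).

    Lies between 0 and 1; equals 1 when all n users get equal
    throughput, and equals k/n when k users share equally and the
    remaining n - k users get nothing.
    """
    n = len(throughputs)
    total = sum(throughputs)
    sum_of_squares = sum(x * x for x in throughputs)
    if sum_of_squares == 0:
        return 0.0  # no throughput at all; report 0 rather than divide by zero
    return (total * total) / (n * sum_of_squares)
```

For example, fairness_index([10, 10, 10, 10]) is 1.0, while fairness_index([10, 10, 0, 0]) is 0.5, matching the k/n property with k = 2 and n = 4.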

3-11 Case Study (Cont). Throughput and delay were found redundant, so power is used instead. Variance in response time was found redundant with the probability of duplication and the probability of disconnection. Total: nine metrics.
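The power metric referred to here is conventionally defined as throughput divided by mean response time, so it rises with throughput and falls with delay; a sketch (function names are illustrative):

```python
def power(throughput, response_time):
    """Power = throughput / response time; higher is better."""
    return throughput / response_time

def knee_operating_point(points):
    """Among (throughput, response_time) operating points, pick the
    one that maximizes power -- one common way to locate the knee."""
    return max(points, key=lambda p: power(p[0], p[1]))
```

For example, knee_operating_point([(10, 1.0), (50, 2.0), (80, 8.0)]) returns (50, 2.0): pushing the load higher still raises throughput, but delay grows faster, so power drops.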

3-12 Commonly Used Performance Metrics: response time and reaction time.

3-13 Response Time (Cont)

3-14 Capacity

3-15 Common Performance Metrics (Cont). Nominal capacity: maximum achievable throughput under ideal workload conditions, e.g., bandwidth in bits per second; the response time at maximum throughput is too high. Usable capacity: maximum throughput achievable without exceeding a pre-specified response-time limit. Knee capacity: the knee is the point of low response time and high throughput.

3-16 Common Performance Metrics (Cont). Turnaround time: the time between the submission of a batch job and the completion of its output. Stretch factor: the ratio of the response time with multiprogramming to that without multiprogramming. Throughput: rate (requests per unit of time); examples: jobs per second, requests per second, Millions of Instructions Per Second (MIPS), Millions of Floating-Point Operations Per Second (MFLOPS), Packets Per Second (PPS), bits per second (bps), Transactions Per Second (TPS).

3-17 Common Performance Metrics (Cont). Efficiency: the ratio of usable capacity to nominal capacity. Alternatively, the efficiency of an n-processor system is the ratio of its performance to n times that of a one-processor system. Utilization: the fraction of time the resource is busy servicing requests; for memory, the average fraction in use.
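Both efficiency ratios on this slide reduce to one-line computations; a minimal sketch (function names are illustrative):

```python
def capacity_efficiency(usable_capacity, nominal_capacity):
    """Efficiency as the ratio of usable to nominal capacity."""
    return usable_capacity / nominal_capacity

def multiprocessor_efficiency(perf_n, perf_1, n):
    """Efficiency of an n-processor system: its speedup over a
    one-processor system divided by n (perfect scaling gives 1.0)."""
    return (perf_n / perf_1) / n
```

For example, a link rated at 100 Mbps that sustains 85 Mbps within the response-time limit has efficiency 0.85, and a 4-processor system running 3.2 times faster than one processor has efficiency 0.8.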

3-18 Common Performance Metrics (Cont). Reliability: probability of errors, or mean time between errors (error-free seconds). Availability: characterized by the Mean Time To Failure (MTTF) and the Mean Time To Repair (MTTR); availability = MTTF/(MTTF+MTTR).
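The availability formula on this slide can be computed directly; a minimal sketch:

```python
def availability(mttf, mttr):
    """Steady-state availability = MTTF / (MTTF + MTTR).

    mttf and mttr must be in the same time unit (e.g., hours).
    """
    return mttf / (mttf + mttr)
```

For example, a system with an MTTF of 999 hours and an MTTR of 1 hour is available 99.9% of the time.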

3-19 Utility Classification of Metrics

3-20 Setting Performance Requirements. Examples: "The system should be both processing and memory efficient. It should not create excessive overhead." "There should be an extremely low probability that the network will duplicate a packet, deliver a packet to the wrong destination, or change the data in a packet." Problems with such requirements: non-specific, non-measurable, non-acceptable, non-realizable, non-thorough. Requirements should instead be SMART.

3-21 Case Study 3.2: Local Area Networks. Service: send a frame to destination D. Outcomes: the frame is correctly delivered to D, incorrectly delivered, or not delivered at all. Requirements: Speed: the access delay at any station should be less than one second, and sustained throughput must be at least 80 Mbits/sec. Reliability: five different error modes, with different amounts of damage and different levels of acceptability.

3-22 Case Study (Cont). The probability of any bit being in error must be less than 1E-7. The probability of any frame being in error (with error indication set) must be less than 1%. The probability of a frame in error being delivered without error indication must be less than 1E-15. The probability of a frame being misdelivered due to an undetected error in the destination address must be less than 1E-18. The probability of a frame being delivered more than once (duplicate) must be less than 1E-5. The probability of losing a frame on the LAN (due to all sorts of errors) must be less than 1%.
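The bit-level and frame-level requirements above are related: assuming independent bit errors with probability p, the probability that a frame of L bits contains at least one bit error is 1 - (1 - p)^L. A sketch checking the 1E-7 bit-error requirement against the 1% frame-error requirement for an illustrative 1500-byte (12000-bit) frame:

```python
def frame_error_probability(bit_error_rate, frame_bits):
    """P(frame contains >= 1 bit error), assuming independent bit errors."""
    return 1.0 - (1.0 - bit_error_rate) ** frame_bits

# With p = 1e-7 and a 12000-bit frame, the frame error probability is
# about 1.2e-3, comfortably below the 1% frame-error requirement.
p_frame = frame_error_probability(1e-7, 12000)
```

This shows how a single low-level requirement (bit error rate) can be checked for consistency with a higher-level one (frame error rate).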

3-23 Case Study (Cont). Availability: two fault modes, network reinitializations and permanent failures. The mean time to initialize the LAN must be less than 15 milliseconds. The mean time between LAN initializations must be at least one minute. The mean time to repair a LAN must be less than one hour. (LAN partitions may be operational during this period.) The mean time between LAN partitioning must be at least one-half a week.

3-24 Summary of Part I. Systematic approach: define the system; list its services, metrics, and parameters; decide on factors, evaluation technique, workload, and experimental design; analyze the data; and present the results. Selecting an evaluation technique: the life-cycle stage is the key. Other considerations are time available, tools available, accuracy required, trade-offs to be evaluated, cost, and saleability of results.

3-25 Summary (Cont). Selecting metrics: for each service, list time, rate, and resource consumption; for each undesirable outcome, measure the frequency and duration of the outcome; check for low variability, non-redundancy, and completeness. Performance requirements should be SMART: specific, measurable, acceptable, realizable, and thorough.

3-26 Homework 3: Exercise 3.1. What methodology would you choose: a. To select a personal computer for yourself? b. To select 1000 workstations for your company? c. To compare two spreadsheet packages? d. To compare two data-flow architectures? And if the answer was required: i. Yesterday? ii. Next quarter? iii. Next year? Prepare a table of 12 entries and write a one-line explanation of each of the 12 choices. Common mistake: not specifying all 12 combinations.

3-27 Thank You!

3-28 Solution to Homework #3

3-29 Exercise 3.2. Make a complete list of metrics to compare: a. Two personal computers; b. Two database systems; c. Two disk drives; d. Two window systems.