
1 Self-Scaling Benchmarks
Peter Chen and David Patterson, "A New Approach to I/O Performance Evaluation – Self-Scaling I/O Benchmarks, Predicted I/O Performance," SIGMETRICS 1993.
© 2003, Carla Ellis

2 Workloads
[Diagram: taxonomy of experimental environments (prototype, real system, execution-driven / trace-driven / stochastic simulation) and the workloads and data sets that drive them. "Real" workloads: live workload, benchmark applications, traces. Made-up workloads: micro-benchmark programs, synthetic benchmark programs, synthetic traces, and distributions & other statistics produced via a monitor/analysis/generator pipeline. "You are here": synthetic benchmark programs.]

3 Goals
A benchmark that automatically scales across current and future systems
– It dynamically adjusts to the system under test
Predicted performance based on self-scaling evaluation results
– Estimate performance for unmeasured workloads
– Basis for comparing different systems

4 Characteristics of an Ideal I/O Benchmark
An ideal I/O benchmark should:
1. Help in understanding why – isolate the reasons for poor performance
2. Be I/O limited
3. Scale gracefully
4. Allow fair comparisons among machines
5. Be relevant to a wide range of applications
6. Be tightly specified and reproducible, with assumptions stated explicitly
Current benchmarks fail to meet these criteria.

5 Overview of Approach
Step 1 – scaling: the benchmark automatically explores the workload space to find a relevant workload.
– Because the chosen workload depends on the system under test, the ability to compare systems on benchmark results is lost.
Step 2 – predicted performance: a prediction scheme helps restore that capability.
– The accuracy of the predictions must be assured.

6 Workload Parameters
uniqueBytes – total size of the data accessed
sizeMean – average size of an I/O request
– Individual request sizes are drawn from a normal distribution
readFrac – fraction of requests that are reads; the fraction of writes is 1 - readFrac
seqFrac – fraction of requests that are sequential accesses
– With multiple processes, each has its own sequential thread of accesses
processNum – degree of concurrency (number of processes)
The workload is a user-level program with these parameters set; a sketch follows below.
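To make the parameter set concrete, here is a minimal single-process sketch of such a user-level workload program. This is hypothetical code, not the paper's benchmark: processNum is omitted (the real benchmark would run one such loop per process), and the file path, request count, and the standard deviation of the size distribution are illustrative assumptions.

```python
import os
import random
import time

def run_workload(path="/tmp/bench.dat", unique_bytes=21 * 2**20,
                 size_mean=10 * 2**10, read_frac=0.0, seq_frac=0.0,
                 num_requests=1000):
    """Issue num_requests I/Os against a data set of unique_bytes bytes."""
    with open(path, "wb") as f:               # pre-create the data set
        f.write(os.urandom(unique_bytes))
    pos = 0
    start = time.time()
    with open(path, "r+b") as f:
        for _ in range(num_requests):
            # Request sizes drawn from a normal distribution around
            # size_mean (the standard deviation is an arbitrary choice).
            size = max(1, int(random.gauss(size_mean, size_mean / 10)))
            size = min(size, unique_bytes)
            # A seq_frac fraction of requests continue where the last one
            # ended; the rest jump to a random offset in the data set.
            if random.random() >= seq_frac or pos + size > unique_bytes:
                pos = random.randrange(unique_bytes - size + 1)
            f.seek(pos)
            if random.random() < read_frac:
                f.read(size)
            else:
                f.write(b"x" * size)
            pos += size
    elapsed = time.time() - start
    return num_requests * size_mean / elapsed  # rough throughput, bytes/s
```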

7 Representativeness
Does such a synthetic workload have the "right" set of parameters to capture a real application (characterized by its values for that set of parameters)?

8 Benchmarking Results
The result is a set of performance graphs, one per parameter, varying that parameter while holding all the others fixed at their focal-point values.
– A focal point is the 75% performance point for that parameter
– Focal points are found by an iterative search process (sketched below)
More of the workload space is explored than by a single-point benchmark.
Dependencies among parameters are not captured.
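One plausible reading of that iterative search, as a sketch: sweep each parameter with the others pinned at their current focal values, move its focal value to the 75% performance point of the resulting curve, and repeat until nothing moves. The parameter ranges, the `measure` function, and the exact 75% criterion here are assumptions.

```python
def percent_point(curve, frac=0.75):
    """Return the parameter value whose performance is closest to
    frac * (maximum performance seen on this sweep)."""
    peak = max(perf for _, perf in curve)
    return min(curve, key=lambda pt: abs(pt[1] - frac * peak))[0]

def self_scale(ranges, focal, measure, max_rounds=5):
    """ranges: {param: list of values to sweep};
    focal: {param: initial focal value};
    measure(params_dict) -> measured performance at that workload point."""
    for _ in range(max_rounds):
        moved = False
        for param, values in ranges.items():
            # Sweep one parameter, all others pinned at their focal values.
            curve = [(v, measure({**focal, param: v})) for v in values]
            new_focal = percent_point(curve)
            moved = moved or new_focal != focal[param]
            focal[param] = new_focal
        if not moved:          # focal points have stabilized
            break
    return focal
```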

9 [Graphs: measured performance curves at the focal point = (21 MB, 10 KB, 0, 1, 0).]

10 Families of Graphs
General applicability – the focal point should be representative across the range of each parameter (the rationale for the 75% point)
Multiple performance regions – especially evident for uniqueBytes because of the storage hierarchy
– A focal point on a region border is unstable
– Prefer mid-range focal points

11 [Graphs: performance in the cache region vs. the disk region. Annotations: larger requests are better; reads are better than writes; sequential access helps in one region and has little effect in the other.]

12 Predicted Performance
Problem: the workloads the benchmark chooses will differ between two systems, so the systems cannot be compared directly.
Solution: estimate performance for unmeasured workloads, so a common set of workloads can be used for comparison.

13 How to Predict
Assume the shape of the performance curve for one parameter is independent of the values of the other parameters.
Use the self-scaling benchmark to measure performance varying each parameter with all the others fixed at the focal point.

14 [Graphs: solid lines show measured throughput – sizeMean fixed at its focal value S_f on the left, processNum fixed at P_f on the right.]
Predict the throughput curve with sizeMean = S_1 by assuming the ratio
Throughput(processNum, S_f) / Throughput(processNum, S_1)
is constant across processNum; its value is known at processNum = P_f from the right-hand graph. A sketch follows below.
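Under that constant-ratio assumption, the unmeasured curve can be computed directly from the two focal-point graphs. A minimal sketch, where the curve functions `t_vs_p` and `t_vs_s` (which would interpolate the two measured graphs) are assumptions:

```python
def predict_throughput(p, s1, t_vs_p, t_vs_s, s_f):
    """Predict Throughput(p, s1) from two measured curves:
    t_vs_p(p) = Throughput(p, s_f)   (left graph, sizeMean fixed at S_f)
    t_vs_s(s) = Throughput(P_f, s)   (right graph, processNum fixed at P_f)
    Assumes Throughput(p, s_f) / Throughput(p, s1) is the same for all p,
    so that ratio can be read off at P_f as t_vs_s(s_f) / t_vs_s(s1)."""
    return t_vs_p(p) * t_vs_s(s1) / t_vs_s(s_f)
```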

15 Accuracy of Predictions
Evaluated on a SPARCstation with one disk.
Measured at random points in the parameter space; prediction error is correlated with uniqueBytes. A sketch of such a check follows below.
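A sketch of how such an accuracy check could be run; `measure`, `predict`, and the sampling ranges are assumptions, not the paper's methodology.

```python
import random

def prediction_errors(measure, predict, ranges, trials=20):
    """Sample random workload points and return per-point relative errors;
    one could then inspect how error varies with point["uniqueBytes"]."""
    errors = []
    for _ in range(trials):
        point = {p: random.choice(vals) for p, vals in ranges.items()}
        measured = measure(point)
        errors.append((point, abs(predict(point) - measured) / measured))
    return errors
```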

16 Comparisons

17 For Discussion Next Thursday (because of snow)
Survey the types of workloads – especially the standard benchmarks – used in your proceedings (10 papers).
www.cs.wisc.edu/~arch/www/tools.html is a great resource.

18 Continued discussion of reinterpreting an experimental paper into a strong inference model.

