
1 Benchmarks
- Programs specifically chosen to measure performance
- Must reflect the typical workload of the user
- Benchmark types:
  - Real applications
  - Small benchmarks
  - Benchmark suites
  - Synthetic benchmarks

2 Real Applications
- Workload: the set of programs a typical user runs day in and day out
- Using real applications as the metric is a direct way of comparing the execution time of the workload on two machines
- Real applications have certain drawbacks as benchmarks:
  - They are usually big
  - They take time to port to different machines
  - They take considerable time to execute
  - It is hard to observe the outcome of a particular improvement technique

3 Comparing & Summarizing Performance
- A is 100 times faster than B for program 1
- B is 10 times faster than A for program 2
- For total performance, the arithmetic mean is used:

                 Computer A   Computer B
    Program 1         1 s        100 s
    Program 2      1000 s        100 s
    Total time     1001 s        200 s
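For a quick check of these numbers, the comparison can be reproduced with a few lines of C (a minimal sketch; only the execution times from the table above are used):

    #include <stdio.h>

    int main(void) {
        /* Execution times (seconds) from the table above */
        double a_p1 = 1.0,    b_p1 = 100.0;   /* Program 1 on A and B */
        double a_p2 = 1000.0, b_p2 = 100.0;   /* Program 2 on A and B */

        printf("Program 1: A is %.0f times faster than B\n", b_p1 / a_p1);
        printf("Program 2: B is %.0f times faster than A\n", a_p2 / b_p2);

        double total_a = a_p1 + a_p2;         /* 1001 s */
        double total_b = b_p1 + b_p2;         /*  200 s */
        printf("Total time: A = %.0f s, B = %.0f s\n", total_a, total_b);
        printf("Overall, B is %.2f times faster than A\n", total_a / total_b);
        return 0;
    }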

4 Arithmetic Mean
If the programs in the workload are not run an equal number of times, we have to use the weighted arithmetic mean:

                          Weight   Computer A   Computer B
    Program 1 (seconds)     10          1           100
    Program 2 (seconds)      1       1000           100
    Weighted AM              -          ?             ?

Suppose that program 1 runs 10 times as often as program 2. Which machine is faster?
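A minimal C sketch of the weighted arithmetic mean, using the weights and execution times from the table above (the output answers the question):

    #include <stdio.h>

    int main(void) {
        /* Weights and execution times (seconds) from the table above */
        double w[2]  = { 10.0, 1.0 };      /* program 1 runs 10x as often as program 2 */
        double ta[2] = { 1.0, 1000.0 };    /* Computer A */
        double tb[2] = { 100.0, 100.0 };   /* Computer B */

        double wsum = w[0] + w[1];
        double am_a = (w[0] * ta[0] + w[1] * ta[1]) / wsum;
        double am_b = (w[0] * tb[0] + w[1] * tb[1]) / wsum;

        printf("Weighted AM, Computer A: %.1f s\n", am_a);   /* ~91.8 s */
        printf("Weighted AM, Computer B: %.1f s\n", am_b);   /* 100.0 s */
        return 0;
    }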

5 Small Benchmarks
- Small code segments that are common in many applications
- For example, loops with a certain instruction mix:
    for (j = 0; j < 8; j++)
        S = S + A[j] * B[i-j];
- Good for architects and designers: since small code segments are easy to compile and simulate (even by hand), designers use these kinds of benchmarks while working on a novel machine
- Can be abused by compiler designers by introducing special-purpose optimizations targeted at a specific benchmark
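The kernel above can be wrapped into a self-contained program; the array sizes and initial values below are illustrative assumptions, not taken from the slide:

    #include <stdio.h>

    #define N 16

    int main(void) {
        double A[8], B[N], S = 0.0;
        int i = 8, j;

        /* Illustrative initialization */
        for (j = 0; j < 8; j++) A[j] = j + 1;
        for (j = 0; j < N; j++) B[j] = 0.5 * j;

        /* The measured inner loop: a small, fixed instruction mix */
        for (j = 0; j < 8; j++)
            S = S + A[j] * B[i - j];

        printf("S = %f\n", S);
        return 0;
    }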

6 Benchmark Suites
- SPEC (Standard Performance Evaluation Corporation)
  - Non-profit organization that aims to produce "fair, impartial and meaningful benchmarks for computers"
  - Began in 1989 with SPEC89 (CPU intensive)
- Companies agreed on a set of real programs and inputs that they hope best reflects a typical user’s workload
- Valuable indicator of performance
- Can still be abused
- Updates are required as the applications and their workloads change over time

7 SPEC Benchmark Sets
- CPU performance (SPEC CPU2006)
- Graphics (SPECviewperf)
- High-performance computing (HPC2002, MPI2007, OMP2001)
- Java server applications (jAppServer2004): a multi-tier benchmark for measuring the performance of Java 2 Enterprise Edition (J2EE) technology-based application servers
- Mail systems (MAIL2001, SPECimap2003)
- Network file systems (SFS97_R1 (3.0))
- Web servers (SPEC WEB99, SPEC WEB99 SSL)
- More information: http://www.spec.org/

8 SPECint Integer Benchmarks

    Name              Description
    400.perlbench     Programming language
    401.bzip2         Compression
    403.gcc           C compiler
    429.mcf           Combinatorial optimization
    445.gobmk         Artificial intelligence
    456.hmmer         Search gene sequence
    458.sjeng         Artificial intelligence
    462.libquantum    Physics / quantum computing
    464.h264ref       Video compression
    471.omnetpp       Discrete event simulation
    473.astar         Path-finding algorithms
    483.xalancbmk     XML processing

9 SPECfp Floating Point Benchmarks

    Name        Type
    wupwise     Quantum chromodynamics
    swim        Shallow water model
    mgrid       Multigrid solver in 3D potential field
    applu       Parabolic/elliptic partial differential equations
    mesa        Three-dimensional graphics library
    galgel      Computational fluid dynamics
    art         Image recognition using neural nets
    equake      Seismic wave propagation simulation
    facerec     Image recognition of faces
    ammp        Computational chemistry
    lucas       Primality testing
    fma3d       Crash simulation
    sixtrack    High-energy nuclear physics accelerator design
    apsi        Meteorology: pollutant distribution

10 SPEC CPU2006 – Summarizing
- SPEC ratio: execution time measurements are normalized by dividing the execution time on a reference machine by the measured execution time
- Reference machine: Sun Microsystems Fire V20z, which has an AMD Opteron 252 CPU running at 2600 MHz
- Example: the 164.gzip benchmark executes in 90.4 s. The reference time for this benchmark is 1400 s, so its SPEC ratio is 1400 / 90.4 × 100 = 1548 (a unitless value)
- The performance of the different programs in a suite is summarized using the geometric mean of the SPEC ratios
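As a sketch of how the ratios and their geometric mean are combined, the C program below uses the 164.gzip numbers from the slide plus two made-up benchmarks purely for illustration:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* SPEC ratio = reference time / measured time (x100, as in the
           slide's example).  The gzip row comes from the slide; the
           other two rows are invented values for illustration only. */
        const char *name[] = { "164.gzip", "benchmark2", "benchmark3" };
        double reference[] = { 1400.0,     1800.0,       1100.0 };   /* s */
        double measured[]  = { 90.4,       120.0,        60.0   };   /* s */
        int n = 3;

        double log_sum = 0.0;
        for (int i = 0; i < n; i++) {
            double ratio = reference[i] / measured[i] * 100.0;
            printf("%-12s SPEC ratio = %.1f\n", name[i], ratio);
            log_sum += log(ratio);
        }
        /* Geometric mean = n-th root of the product of the ratios */
        printf("Geometric mean = %.1f\n", exp(log_sum / n));
        return 0;
    }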

11 Pentium III & Pentium 4

12 Comparing Pentium III and Pentium 4

    Ratio                            Pentium III   Pentium 4
    CINT2000 / Clock rate in MHz        0.47          0.36
    CFP2000 / Clock rate in MHz         0.34          0.39

Implementation efficiency?

13 SPEC WEB99

    System      Processor           Disk drives  CPUs  Networks  Clock (GHz)  Result
    1550/1000   Pentium III              2         2       2        1.0        2765
    1650        Pentium III              3         2       1        1.4        1810
    2500        Pentium III              8         2       4        1.13       3435
    2550        Pentium III              1         2       1        1.26       1454
    2650        Pentium 4 Xeon           5         2       4        3.06       5698
    4600        Pentium 4 Xeon          10         2       4        2.2        4615
    6400/700    Pentium III Xeon         5         4       4        0.7        4200
    6600        Pentium 4 Xeon MP        8         4       8        2.0        6700
    8450/700    Pentium III Xeon         7         8       8        0.7        8001

14 Power Consumption Concerns
- Performance is studied at different levels:
  1. Maximum power
  2. An intermediate level that conserves battery life
  3. Minimum power, which maximizes battery life
- Intel Mobile Pentium & Pentium M: two available clock rates
  1. Maximum
  2. Reduced clock rate
- Pentium M @ 1.6 / 0.6 GHz
- Pentium 4-M @ 2.4 / 1.2 GHz
- Pentium III-M @ 1.2 / 0.8 GHz

15 Three Intel Mobile Processors

16 Energy Efficiency

17 Synthetic Benchmarks
- Artificial programs constructed to match the characteristics of a large set of programs
- Goal: create a single benchmark program in which the execution frequency of instructions simulates the instruction frequency in a large set of benchmarks
- Examples: Dhrystone, Whetstone
- They are not real programs
- Compiler and hardware optimizations can inflate the improvement far beyond what the same optimization would achieve with real programs

18 Amdahl’s Law in Computing
- Improving one aspect of a machine by a factor of n does not improve the overall performance by the same amount
- Speedup = (Performance after improvement) / (Performance before improvement)
          = (Execution time before improvement) / (Execution time after improvement)
- Execution time after improvement = Execution time unaffected + (Execution time affected / n)
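A minimal sketch of this formula in C, with purely illustrative numbers (a 100 s program of which 40 s is sped up by a factor of 10):

    #include <stdio.h>

    /* Execution time after improvement =
       time unaffected + (time affected / improvement factor n) */
    double amdahl_time(double t_unaffected, double t_affected, double n) {
        return t_unaffected + t_affected / n;
    }

    int main(void) {
        double before = 100.0;                        /* 60 s + 40 s */
        double after  = amdahl_time(60.0, 40.0, 10.0); /* 60 + 4 = 64 s */
        printf("Speedup = %.2f\n", before / after);    /* 100 / 64 = 1.56 */
        return 0;
    }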

19 Amdahl’s Law
Example: Suppose a program runs in 100 s on a machine, with multiplication responsible for 80 s of this time. How much do we have to improve the speed of multiplication if we want the program to run 4 times faster? Can we improve the performance by a factor of 5?
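A sketch of the arithmetic, solving the formula from the previous slide for the required improvement factor n (the worked steps are not on the slide itself):

    #include <stdio.h>

    int main(void) {
        double total = 100.0, mult = 80.0;   /* seconds, from the example */
        double other = total - mult;         /* 20 s not affected */

        /* 4x faster: target = 100/4 = 25 s = 20 + 80/n  =>  n = 80/5 = 16 */
        double target4 = total / 4.0;
        double n = mult / (target4 - other);
        printf("For 4x overall speedup, multiplication must be %.0fx faster\n", n);

        /* 5x faster: target = 100/5 = 20 s, but the unaffected part alone
           already takes 20 s, so no multiplication speedup can achieve it. */
        double target5 = total / 5.0;
        if (target5 <= other)
            printf("A 5x overall speedup is impossible\n");
        return 0;
    }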

20 Amdahl’s Law
- The performance enhancement possible with a given improvement is limited by the amount that the improved feature is used
- In the previous example, it makes sense to improve multiplication since it takes 80% of the execution time
- But once a certain improvement has been made, further effort to optimize multiplication will yield only insignificant gains: the Law of Diminishing Returns
- A corollary of Amdahl’s Law: make the common case fast

21 Examples
- Suppose we enhance a machine so that all floating-point instructions run five times faster. If the execution time of some benchmark before the floating-point enhancement is 10 seconds, what will the speedup be if half of the 10 seconds is spent executing floating-point instructions?
- We are looking for a benchmark to show off the new floating-point unit described above, and want the overall benchmark to show a speedup of 3. One benchmark we are considering runs for 90 seconds with the old floating-point hardware. How much of the execution time would floating-point instructions have to account for in this program in order to yield the desired speedup on this benchmark?
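A sketch of both calculations in C, following the same Amdahl’s Law formula (the worked numbers are derived here, not given on the slide):

    #include <stdio.h>

    int main(void) {
        /* Example 1: 10 s total, half spent in FP, FP becomes 5x faster */
        double total = 10.0, fp = 5.0, n = 5.0;
        double after = (total - fp) + fp / n;                 /* 5 + 1 = 6 s */
        printf("Example 1: speedup = %.2f\n", total / after); /* ~1.67 */

        /* Example 2: 90 s benchmark, want overall speedup of 3 with a 5x FP unit.
           Solve 90/3 = (90 - f) + f/5 for f, the FP execution time. */
        double total2 = 90.0, target = total2 / 3.0;          /* 30 s */
        double f = (total2 - target) / (1.0 - 1.0 / n);       /* 60 / 0.8 = 75 s */
        printf("Example 2: FP must account for %.0f s (%.0f%% of the run)\n",
               f, 100.0 * f / total2);
        return 0;
    }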

22 Remember
- Total execution time is a consistent summary of performance
- Execution time = (IC × CPI) / f
- For a given architecture, performance increases come from:
  1. Increases in clock rate (without too many adverse CPI effects)
  2. Improvements in processor organization that lower CPI
  3. Compiler enhancements that lower CPI and/or IC
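A tiny illustration of the execution-time equation, with assumed values for IC, CPI, and clock frequency:

    #include <stdio.h>

    int main(void) {
        /* Illustrative numbers: 10^9 instructions, CPI of 2, 1 GHz clock */
        double ic  = 1e9;    /* instruction count */
        double cpi = 2.0;    /* clock cycles per instruction */
        double f   = 1e9;    /* clock frequency in Hz */

        double exec_time = ic * cpi / f;        /* = 2 seconds */
        printf("Execution time = %.2f s\n", exec_time);
        return 0;
    }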

