1 COMP 206: Computer Architecture and Implementation Montek Singh Wed., Sep 1, 2004 Lecture 3 (continuation of Lecture 2)


1 COMP 206: Computer Architecture and Implementation Montek Singh Wed., Sep 1, 2004 Lecture 3 (continuation of Lecture 2)

2 Outline
 Quantitative Principles of Computer Design: Amdahl’s law (make the common case fast)
 Performance Metrics: MIPS, FLOPS, and all that…
 Examples

3 Example 1 (see HP3 pp. 42-45 for more examples)
Which change is more effective on a certain machine: speeding up 10-fold the floating point square root operation only, which takes up 20% of execution time, or speeding up 2-fold all floating point operations, which take up 50% of total execution time? (Assume that the cost of accomplishing either change is the same, and the two changes are mutually exclusive.)
Notation:
F_sqrt = fraction of FP sqrt results; R_sqrt = rate of producing FP sqrt results
F_non-sqrt = fraction of non-sqrt results; R_non-sqrt = rate of producing non-sqrt results
F_fp = fraction of FP results; R_fp = rate of producing FP results
F_non-fp = fraction of non-FP results; R_non-fp = rate of producing non-FP results
R_before = average rate of producing results before enhancement
R_after = average rate of producing results after enhancement

4 Example 1 (Soln. using Amdahl’s Law)
Improve FP sqrt only: Speedup = 1 / ((1 − 0.2) + 0.2/10) = 1/0.82 ≈ 1.22
Improve all FP ops: Speedup = 1 / ((1 − 0.5) + 0.5/2) = 1/0.75 ≈ 1.33
Speeding up all FP operations is the more effective change.
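The slide's arithmetic can be reproduced with a short sketch (the function name is mine, not from the slides):

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Overall speedup when only a fraction of execution time is enhanced."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# FP sqrt only: 20% of execution time made 10x faster
sqrt_only = amdahl_speedup(0.20, 10)   # ~1.22
# All FP ops: 50% of execution time made 2x faster
all_fp = amdahl_speedup(0.50, 2)       # ~1.33
```

The common case (all FP operations) wins even though its individual speedup factor is far smaller.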

5 Example 2 Which CPU performs better? Why?

6 Example 2 (Solution) If the clock cycle time of A were only 1.1x the clock cycle time of B, then CPU B would have about 9% higher performance.

7 Example 3 A LOAD/STORE machine has the characteristics shown below. We also observe that 25% of the ALU operations directly use a loaded value that is not used again. Thus we hope to improve things by adding new ALU instructions that have one source operand in memory. The CPI of the new instructions is 2. The only unpleasant consequence of this change is that the CPI of branch instructions will increase from 2 to 3. Overall, will CPU performance increase?

8 Example 3 (Solution) Comparing CPU time before and after the change: since CPU time increases, the change will not improve performance.
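The instruction-mix table did not survive the transcript, so here is a sketch using the mix commonly quoted with this exercise (ALU 43%, loads 21%, stores 12%, branches 24%, with CPIs of 1, 2, 2, 2). Treat those frequencies as my assumption, not the slide's actual numbers:

```python
# Assumed baseline mix (NOT from the slide): fraction of instructions, CPI
before = {"alu": (0.43, 1), "load": (0.21, 2), "store": (0.12, 2), "branch": (0.24, 2)}
cycles_before = sum(f * cpi for f, cpi in before.values())  # cycles per original instruction

# 25% of ALU ops fuse with their load into a new reg-mem ALU op (CPI 2);
# branch CPI rises from 2 to 3. Fractions stay relative to the ORIGINAL count.
fused = 0.25 * 0.43
after = {
    "alu":     (0.43 - fused, 1),
    "new_alu": (fused, 2),
    "load":    (0.21 - fused, 2),   # each fused op eliminates one load
    "store":   (0.12, 2),
    "branch":  (0.24, 3),
}
cycles_after = sum(f * cpi for f, cpi in after.values())
# Instruction count shrinks, but total cycles grow, so CPU time increases.
```

With these numbers, cycles per original instruction grow from 1.57 to about 1.70, matching the slide's conclusion.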

9 Example 4 A load-store machine has the characteristics shown below. An optimizing compiler for the machine discards 50% of the ALU operations, although it cannot reduce loads, stores, or branches. Assuming a 500 MHz (2 ns) clock, what is the MIPS rating for optimized code versus unoptimized code? Does the ranking of MIPS agree with the ranking of execution time?

10 Example 4 (Solution) Comparing the code without and with optimization: performance increases, but MIPS decreases!
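Again the slide's table is missing, so this sketch assumes the same classic mix (ALU 43% at CPI 1, loads 21%, stores 12%, branches 24%, each at CPI 2) together with the slide's 500 MHz clock:

```python
def stats(mix):
    """Return (CPI, MIPS, relative execution time) for a dict of
    (relative instruction count, CPI) pairs at a 500 MHz clock."""
    instr = sum(f for f, _ in mix.values())
    cycles = sum(f * cpi for f, cpi in mix.values())
    cpi = cycles / instr
    mips = 500 / cpi          # MIPS = clock rate in MHz / CPI
    return cpi, mips, cycles  # cycles are proportional to execution time

# Assumed mix (not preserved in the transcript)
unopt = {"alu": (0.43, 1), "load": (0.21, 2), "store": (0.12, 2), "branch": (0.24, 2)}
opt = dict(unopt, alu=(0.43 / 2, 1))   # optimizer discards half the ALU ops

cpi_u, mips_u, t_u = stats(unopt)   # CPI ~1.57, MIPS ~318
cpi_o, mips_o, t_o = stats(opt)     # CPI ~1.73, MIPS ~290
# MIPS drops even though execution time improves: MIPS punishes removing cheap
# (CPI 1) instructions, because the surviving mix has a higher average CPI.
```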

11 Performance of (Blocking) Caches
No cache misses: CPU time = IC × CPI_execution × Clock cycle time
With cache misses: CPU time = IC × (CPI_execution + Memory accesses per instruction × Miss rate × Miss penalty) × Clock cycle time

12 Example Assume we have a machine where the CPI is 2.0 when all memory accesses hit in the cache. The only data accesses are loads and stores, and these total 40% of the instructions. If the miss penalty is 25 clock cycles and the miss rate is 2%, how much faster would the machine be if all memory accesses were cache hits? Why?
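This example is fully specified, so the answer can be worked directly (variable names are mine):

```python
cpi_perfect = 2.0                  # CPI when every access hits
mem_refs_per_instr = 1.0 + 0.4     # one instruction fetch + 40% loads/stores
miss_rate = 0.02
miss_penalty = 25                  # clock cycles

stall_cpi = mem_refs_per_instr * miss_rate * miss_penalty   # 0.7 extra cycles/instr
cpi_real = cpi_perfect + stall_cpi                          # 2.7
speedup_if_all_hit = cpi_real / cpi_perfect                 # 1.35
```

A perfect cache would make the machine 1.35x faster; the "Why?" is that misses add 0.7 stall cycles to every instruction on average.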

13 Means

14 Weighted Means

15 Relations among Means For positive data, Arithmetic mean ≥ Geometric mean ≥ Harmonic mean. Equality holds if and only if all the elements are identical.
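The AM ≥ GM ≥ HM relation can be checked numerically (the sample data is arbitrary):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # computed via logs to avoid overflow on large inputs
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)

data = [1.0, 2.0, 4.0]   # hypothetical positive sample
# arithmetic_mean(data) > geometric_mean(data) > harmonic_mean(data),
# with equality only when all elements coincide.
```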

16 Summarizing Computer Performance
“Characterizing Computer Performance with a Single Number”, J. E. Smith, CACM, October 1988, pp. 1202-1206
 The starting point is universally accepted: “The time required to perform a specified amount of computation is the ultimate measure of computer performance”
 How should we summarize (reduce to a single number) the measured execution times (or measured performance values) of several benchmark programs?
 Two required properties:
A single-number performance measure for a set of benchmarks expressed in units of time should be directly proportional to the total (weighted) time consumed by the benchmarks.
A single-number performance measure for a set of benchmarks expressed as a rate should be inversely proportional to the total (weighted) time consumed by the benchmarks.

17 Arithmetic Mean for Times AM(t_1, …, t_n) = (t_1 + t_2 + … + t_n) / n. Smaller is better for execution times.

18 Harmonic Mean for Rates HM(r_1, …, r_n) = n / (1/r_1 + 1/r_2 + … + 1/r_n). Larger is better for execution rates.
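Smith's rate property can be seen directly: if each benchmark does the same amount of work, the harmonic mean of the per-benchmark rates equals total work divided by total time, so it is inversely proportional to total time. A small sketch with hypothetical times:

```python
times = [2.0, 4.0]    # hypothetical seconds per benchmark
work = 1.0            # assumed equal work units per benchmark
rates = [work / t for t in times]

harmonic = len(rates) / sum(1.0 / r for r in rates)
overall_rate = len(times) * work / sum(times)   # total work / total time
# harmonic == overall_rate, which is exactly the required rate property.
```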

19 Avoid the Geometric Mean
 If benchmark execution times are normalized to some reference machine, and means of normalized execution times are computed, only the geometric mean gives consistent results no matter what the reference machine is (see Figure 1.17 in HP3, pg. 38). This has led to declaring the geometric mean as the preferred method of summarizing execution time (e.g., SPEC)
 Smith’s comments:
“The geometric mean does provide a consistent measure in this context, but it is consistently wrong.”
“If performance is to be normalized with respect to a specific machine, an aggregate performance measure such as total time or harmonic mean rate should be calculated before any normalizing is done. That is, benchmarks should not be individually normalized first.”
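Smith's "consistent but consistently wrong" point can be demonstrated with two hypothetical machines (the times below are invented for illustration):

```python
import math

def gmean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical execution times (seconds) on two benchmarks
a = [1.0, 1000.0]
b = [100.0, 100.0]

total_a, total_b = sum(a), sum(b)    # 1001 vs 200: B is ~5x faster overall
gm_a, gm_b = gmean(a), gmean(b)      # ~31.6 vs 100: geometric mean prefers A

# Consistency under normalization: the geometric mean of per-benchmark ratios
# equals the ratio of geometric means, regardless of the reference machine...
ratio = gmean([x / y for x, y in zip(a, b)])
# ...but that consistent verdict still contradicts total running time.
```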

20 Programs to Evaluate Performance
 (Toy) Benchmarks: 10-100 line programs  sieve, puzzle, quicksort
 Synthetic Benchmarks: attempt to match average frequencies of real workloads  Whetstone, Dhrystone
 Kernels: time-critical excerpts of real programs  Livermore loops
 Real programs  gcc, compress
“The principle behind benchmarking is to model a real job mix with a smaller set of representative programs.” J. E. Smith

21 SPEC: Std Perf Evaluation Corp
 First round 1989 (SPEC CPU89): 10 programs yielding a single number
 Second round 1992 (SPEC CPU92): SPECint92 (6 integer programs) and SPECfp92 (14 floating point programs)
 Compiler flags unlimited. March 93 results of DEC 4000 Model 610:
–spice: unix.c:/def=(sysv,has_bcopy,”bcopy(a,b,c)=memcpy(b,a,c)”
–wave5: /ali=(all,dcom=nat)/ag=a/ur=4/ur=200
–nasa7: /norecu/ag=a/ur=4/ur2=200/lc=blas
 Third round 1995 (SPEC CPU95): single flag setting for all programs; new set of programs (8 integer, 10 floating point); phased out in June 2000
 SPEC CPU2000 released April 2000

22 SPEC95 Details
 Reference machine: Sun SPARCstation 10/40, 128 MB memory, Sun SC 3.0.1 compilers
 Benchmarks larger than SPEC92: larger code size, more memory activity, minimal calls to library routines
 Greater reproducibility of results: standardized build and run environment, manual intervention forbidden, definitions of baseline tightened
 Multiple numbers: SPECint_95base, SPECint_95, SPECfp_95base, SPECfp_95
Source: SPEC

23 Trends in Integer Performance Source: Microprocessor Report 13(17), 27 Dec 1999

24 Trends in Floating Point Performance Source: Microprocessor Report 13(17), 27 Dec 1999

25 SPEC95 Ratings of Processors Source: Microprocessor Report, 24 Apr 2000

26 SPEC95 vs SPEC CPU2000 Read “SPEC CPU2000: Measuring CPU Performance in the New Millennium”, John L. Henning, Computer, July 2000, pages 28-35. Source: Microprocessor Report, 17 Apr 2000

27 SPEC CPU2000 Example
 Baseline machine: Sun Ultra 5, 300 MHz UltraSPARC IIi, 256 KB L2
 Running time ratios scaled by factor of 100: reference score of baseline machine is 100; reference time of 176.gcc should be 1100, not 110
 Example shows 667 MHz Alpha processor on both CINT2000 and CINT95
Source: Microprocessor Report, 17 Apr 2000

28 Performance Evaluation
 Given sales is a function of performance relative to the competition, there’s a big investment in improving product as reported by performance summary
 Good products created when you have: good benchmarks, good ways to summarize performance
 If benchmarks/summary inadequate, then choose between improving product for real programs vs. improving product to get more sales. Sales almost always wins!
 Execution time is the measure of computer performance!
 What about cost?

29 Cost of Integrated Circuits (Dingwall’s Equation)

30 Explanations
Dies per wafer = π × (Wafer diameter / 2)² / Die area − π × Wafer diameter / √(2 × Die area). The second term corrects for the rectangular dies near the periphery of round wafers.
Die yield = Wafer yield × (1 + Defects per unit area × Die area / α)^(−α). This assumes a simple empirical model: defects are randomly distributed over the wafer, and yield is inversely proportional to the complexity of the fabrication process (indicated by α).
α = 3 for modern processes implies that cost of die is proportional to (Die area)^4.
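The two formulas above can be sketched directly; the wafer size, die area, and defect density below are hypothetical example values, not figures from the slides:

```python
import math

def dies_per_wafer(wafer_diameter_cm, die_area_cm2):
    """Gross die count: wafer area over die area, minus an edge correction
    for rectangular dies at the periphery of the round wafer."""
    r = wafer_diameter_cm / 2
    return int(math.pi * r * r / die_area_cm2
               - math.pi * wafer_diameter_cm / math.sqrt(2 * die_area_cm2))

def die_yield(wafer_yield, defects_per_cm2, die_area_cm2, alpha=3.0):
    """Empirical yield model with randomly distributed defects;
    alpha reflects fabrication process complexity."""
    return wafer_yield * (1 + defects_per_cm2 * die_area_cm2 / alpha) ** -alpha

# Hypothetical: 30 cm wafer, 1 cm^2 die, 0.8 defects/cm^2, alpha = 3
n = dies_per_wafer(30, 1.0)     # candidate dies per wafer
y = die_yield(1.0, 0.8, 1.0)    # fraction of those dies that are good
```

Because yield falls and die count falls as die area grows, the cost of a good die rises much faster than linearly in area, which is the (Die area)^4 observation above.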

31 Real World Examples
“Revised Model Reduces Cost Estimates”, Linley Gwennap, Microprocessor Report 10(4), 25 Mar 1996
 0.18-micron process standard, 0.11-micron available now
 BiCMOS is dead
 Silicon-on-Insulator (SOI) process in works

32 Moore’s Law
“Cramming More Components onto Integrated Circuits”, G. E. Moore, Electronics, pp. 114-117, April 1965
 Historical context: predicting implications of technology scaling. Makes over 25 predictions, and all of them have come true
 Read the paper and find out these predictions!
 Moore’s Law: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.”
Based on extrapolation from five points! Later followed by a more accurate formula. Technology scaling of integrated circuits following this trend has been a driver of much economic productivity over the last two decades.

33 Moore’s Law in Action at Intel Source: Microprocessor Report 9(6), 8 May 1995

34 Moore’s Law At Risk? Source: Microprocessor Report, 24 Aug 1998

35 Where Do The Transistors Go? Source: Microprocessor Report, 24 Apr 2000
 Logic contributes a (vanishingly) small fraction of the number of transistors
 Memory (mostly on-chip cache) is the biggest fraction
 Computing is free, communication is expensive

36 Chip Photographs Source: http://micro.magnet.fsu.edu/chipshots/index.html UltraSparc, HP-PA 8000

37 Embedded Processors Source: Microprocessor Report, 17 Jan 2000
 More new instruction sets introduced in 1999 than in the PC market over the last 15 years
 Hot trends of 1999: network processors, configurable cores, VLIW-based processors
 ARM unit sales now surpass 68K/ColdFire unit sales
 Diversity of market supports a wide range of performance, power, and cost

38 Power-Performance Tradeoff (Embedded) Source: Microprocessor Report, 17 Jan 2000 Used in some Palms

