
1. Computer Architecture: A Quantitative Approach, Fifth Edition. Chapter 1: Fundamentals of Quantitative Design and Analysis. Copyright © 2012, Elsevier Inc. All rights reserved.

2. Computer Technology
- Performance improvements come from:
  - Improvements in semiconductor technology (feature size, clock speed)
  - Improvements in computer architectures, enabled by High-Level Language (HLL) compilers and UNIX, which led to RISC architectures
- Together these have enabled:
  - Lightweight computers
  - Productivity-based managed/interpreted programming languages

3. Single Processor Performance (figure: growth of single-processor performance over time, marking the RISC era and the move to multiprocessors)

4. Moore's Law
- Exponential growth: the number of transistors doubles every couple of years

5. Do you want to be a millionaire?
- You double your investment every day, starting from one cent.
- How long does it take to become a millionaire?
  a) 20 days  b) 27 days  c) 37 days  d) 365 days  e) a lifetime and then some

6. Do you want to be a millionaire?
- a) 20 days: one million cents
- b) 27 days: a millionaire
- c) 37 days: a billionaire
- Transistors double roughly every 18 months; this growth rate is hard to imagine.
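The quiz arithmetic above can be checked directly; a quick sketch in Python:

```python
# Doubling one cent every day: after n days you hold 2**n cents.
def days_until(target_cents):
    """Smallest number of daily doublings needed to reach the target."""
    n, cents = 0, 1
    while cents < target_cents:
        cents *= 2
        n += 1
    return n

print(days_until(10**6))   # 20 days to one million cents
print(days_until(10**8))   # 27 days to $1,000,000 (a millionaire)
print(days_until(10**11))  # 37 days to $1,000,000,000 (a billionaire)
```

The same exponential is why transistor counts become hard to imagine after a few decades of doubling.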

7. Current Trends in Architecture
- Cannot continue to leverage instruction-level parallelism (ILP) alone: single-processor performance scaling ended in 2003
- New models for performance:
  - Data-level parallelism (DLP)
  - Thread-level parallelism (TLP)
  - Request-level parallelism (RLP)
- These require explicit restructuring of the application

8. Classes of Computers
- Personal Mobile Device (PMD), e.g. smart phones and tablet computers: emphasis on energy efficiency and real-time performance
- Desktop computing: emphasis on price-performance
- Servers: emphasis on availability, scalability, throughput
- Clusters / warehouse-scale computers, used for "Software as a Service" (SaaS): emphasis on availability and price-performance
  - Sub-class: supercomputers; emphasis on floating-point performance and fast internal networks
- Embedded computers: emphasis on price

9. Parallelism
- Classes of parallelism in applications:
  - Data-level parallelism (DLP)
  - Task-level parallelism (TLP)
- Classes of architectural parallelism:
  - Instruction-level parallelism (ILP)
  - Vector architectures / graphics processing units (GPUs)
  - Thread-level parallelism
  - Request-level parallelism

10. Flynn's Taxonomy
- Single instruction stream, single data stream (SISD)
- Single instruction stream, multiple data streams (SIMD): vector architectures, multimedia extensions, graphics processing units
- Multiple instruction streams, single data stream (MISD): no commercial implementation
- Multiple instruction streams, multiple data streams (MIMD): tightly coupled MIMD, loosely coupled MIMD

11. Defining Computer Architecture
- "Old" view of computer architecture: Instruction Set Architecture (ISA) design, i.e. decisions regarding registers, memory addressing, addressing modes, instruction operands, available operations, control-flow instructions, instruction encoding
- "Real" computer architecture:
  - Meets the specific requirements of the target machine
  - Designed to maximize performance within cost, power, and availability constraints
  - Includes ISA, microarchitecture, and hardware

12. Correlations to Other Fields
- Applications: HTML/XML, audio, video, data compression, and many others
- Compilers: gcc, Intel C++, Visual C++, C#, Java, etc.
- Operating systems: Linux, Windows, Unix, etc.
- Computer architecture: instruction set, memory hierarchy, parallelism, power-efficient design, etc.
- Circuits and physical devices
- Advances in, or demands from, one field drive the others.

13. Trends in Technology
- Integrated circuit technology:
  - Transistor density: 35%/year
  - Die size: 10-20%/year
  - Integration overall: 40-55%/year
- DRAM capacity: 25-40%/year (slowing)
- Flash capacity: 50-60%/year; 15-20x cheaper per bit than DRAM
- Magnetic disk technology: 40%/year; 15-25x cheaper per bit than Flash, 300-500x cheaper per bit than DRAM

14. Bandwidth and Latency
- Bandwidth (throughput): total work done in a given time
  - 10,000-25,000x improvement for processors
  - 300-1,200x improvement for memory and disks
- Latency (response time): time between start and completion of an event
  - 30-80x improvement for processors
  - 6-8x improvement for memory and disks

15. Bandwidth and Latency (figure: log-log plot of bandwidth and latency milestones)

16. Transistors and Wires
- Feature size: the minimum size of a transistor or wire in the x or y dimension
  - From 10 microns in 1971 down to 0.032 microns in 2011
- Transistor performance scales linearly with feature size
- Wire delay does not improve with feature size!
- Integration density scales quadratically

17. Power and Energy
- Problem: getting power in, and getting heat out
- Thermal Design Power (TDP):
  - Characterizes sustained power consumption
  - Used as the target for the power supply and cooling system
  - Lower than peak power, higher than average power consumption
- Clock rate can be reduced dynamically to limit power consumption
- Energy per task is often a better measurement

18. Dynamic Energy and Power
- Dynamic energy (per transistor switch, 0 -> 1 or 1 -> 0) = 1/2 x capacitive load x voltage^2
- Dynamic power = 1/2 x capacitive load x voltage^2 x frequency switched
- Reducing the clock rate reduces power, but not energy
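The two formulas can be sketched numerically; the capacitance, voltage, and frequency below are made-up illustrative values, not figures from the slide:

```python
def dynamic_energy(cap_load, voltage):
    """Energy per transistor switch: 1/2 * C * V^2 (joules)."""
    return 0.5 * cap_load * voltage ** 2

def dynamic_power(cap_load, voltage, freq):
    """Sustained switching power: 1/2 * C * V^2 * f (watts)."""
    return dynamic_energy(cap_load, voltage) * freq

# Halving the clock rate halves power, but the energy per task is
# unchanged: the same number of switches happen, just over more time.
p_full = dynamic_power(1e-9, 1.0, 3e9)    # 1.5 W
p_half = dynamic_power(1e-9, 1.0, 1.5e9)  # 0.75 W
```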

19. Power
- The Intel 80386 consumed ~2 W; a 3.3 GHz Intel Core i7 consumes 130 W
- That heat must be dissipated from a 1.5 cm x 1.5 cm chip
- This is about the limit of what can be cooled by air

20. Reducing Power
- Techniques for reducing power:
  - Do nothing well (idle units should consume little power)
  - Dynamic voltage-frequency scaling (DVFS)
  - Low-power states for DRAM and disks
  - Overclocking some cores while turning others off

21. Static Power
- Static power consumption = static current x voltage
- Scales with the number of transistors
- To reduce it: power gating, i.e. turning off the power supply to idle circuits to cut leakage

22. Trends in Cost
- Cost is driven down by the learning curve (improving yield)
- DRAM: price closely tracks cost
- Microprocessors: price depends on volume; roughly 10% less for each doubling of volume

23. Integrated Circuit Cost
- Bose-Einstein yield formula: die yield = wafer yield x 1 / (1 + defects per unit area x die area)^N
- Defects per unit area = 0.016-0.057 defects per square cm (2010)
- N = process-complexity factor = 11.5-15.5 (40 nm, 2010)
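As a sketch, the yield model can be evaluated numerically; the wafer yield, die area, and the specific defect density and N below are illustrative picks (the latter two chosen from within the quoted 2010 ranges), not values from the slide:

```python
def die_yield(wafer_yield, defects_per_cm2, die_area_cm2, n):
    """Bose-Einstein die-yield model:
    die yield = wafer yield / (1 + defect density * die area)**N
    """
    return wafer_yield / (1 + defects_per_cm2 * die_area_cm2) ** n

# Illustrative: perfect wafer yield, mid-range 2010 defect density,
# a hypothetical 1.5 cm^2 die, mid-range process-complexity factor.
y = die_yield(1.0, 0.03, 1.5, 13.5)
```

Bigger dies or dirtier processes drive the yield down sharply, which is why cost per good die grows much faster than die area.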

24. Dependability
- Module reliability metrics:
  - Mean time to failure (MTTF)
  - Mean time to repair (MTTR)
  - Mean time between failures: MTBF = MTTF + MTTR
- Availability = MTTF / MTBF
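A small numerical example of the availability definition; the MTTF and MTTR figures are invented for illustration:

```python
def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR) = MTTF / MTBF."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical module: fails on average once per 1,000,000 hours
# and takes 24 hours to repair.
a = availability(1_000_000, 24)  # about 0.999976
```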

25. Measuring Performance
- Typical performance metrics: response time, throughput
- Speedup of X relative to Y = execution time of Y / execution time of X
- Execution time:
  - Wall-clock time: includes all system overheads
  - CPU time: only computation time
- Benchmarks:
  - Kernels (e.g. matrix multiply)
  - Toy programs (e.g. sorting)
  - Synthetic benchmarks (e.g. Dhrystone)
  - Benchmark suites (e.g. SPEC CPU2006, TPC-C)

26. Benchmark Suites
- Desktop:
  - SPEC CPU2006: 12 integer and 17 floating-point programs
  - SPECviewperf, SPECapc: graphics benchmarks
- Server:
  - SPEC CPU2006 running multiple copies: SPECrate
  - SPECSFS: NFS performance
  - SPECWeb: web server benchmark
  - TPC-x: transaction processing, queries, and decision-support database applications
- Embedded processors (a newer area):
  - EEMBC: EDN Embedded Microprocessor Benchmark Consortium

27. SPEC2006 Programs and the Evolution of the SPEC Benchmarks (figure)

28. Comparing Performance
- Arithmetic mean
- Weighted arithmetic mean
- Geometric mean: each execution-time ratio is normalized to a base machine; used to summarize SPEC ratios

29. Comparing Performance: Arithmetic Mean
- For programs P1 and P2 (execution times from the table):
  - B is 9.1 times faster than A
  - C is 25 times faster than A
  - C is 2.75 times faster than B
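The slide's ratios can be reproduced from a pair of execution times per machine; the times below are hypothetical values chosen to match the quoted ratios, not taken from the (omitted) table:

```python
# Execution times in seconds for programs P1 and P2 on machines A, B, C
# (hypothetical values chosen so the arithmetic-mean ratios match).
times = {"A": [1, 1000], "B": [10, 100], "C": [20, 20]}

def arithmetic_mean(ts):
    return sum(ts) / len(ts)

means = {m: arithmetic_mean(ts) for m, ts in times.items()}
ratio_ab = means["A"] / means["B"]  # 9.1: B is 9.1x faster than A
ratio_ac = means["A"] / means["C"]  # about 25: C is 25x faster than A
ratio_bc = means["B"] / means["C"]  # 2.75: C is 2.75x faster than B
```

Note how the mean is dominated by the longest-running program, which is exactly why weighting matters on the next slide.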

30. Comparing Performance: Weighted Arithmetic Mean
- Weighted arithmetic mean = sum over programs of (weight x execution time)
- Different weightings can lead to different conclusions

31. Amdahl's Law
- Speedup = 1 / ((1 - f) + f/n), where:
  - f is the fraction of the execution time that can be enhanced
  - n is the enhancement factor
- Example: f = 0.4, n = 10 => Speedup = 1.56
- Total speedup is limited if only a portion can be enhanced: the performance improvement gained from a faster mode of execution is limited by the fraction of the time the faster mode can be used.

32. Amdahl's Law: Example
- Amdahl's Law is useful for comparing the overall performance of two design alternatives.
- Example: floating-point (FP) operations consume 50% of the execution time of a graphics application, and FP square root (FPSQRT) accounts for 20% of execution time.
  - Improve FPSQRT execution by 10x: Speedup = 1 / ((1 - 0.2) + 0.2/10) = 1.22
  - Improve all FP operations by 1.6x: Speedup = 1 / ((1 - 0.5) + 0.5/1.6) = 1.23
- Improving all FP operations is slightly better because of their higher frequency.
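Both alternatives in the example can be checked with a small helper:

```python
def amdahl_speedup(f, n):
    """Amdahl's Law: overall speedup when a fraction f of execution
    time is accelerated by a factor n."""
    return 1.0 / ((1.0 - f) + f / n)

s_fpsqrt = amdahl_speedup(0.2, 10)   # ~1.22: speed up FPSQRT by 10x
s_all_fp = amdahl_speedup(0.5, 1.6)  # ~1.23: speed up all FP by 1.6x
```

The second alternative wins despite its much smaller enhancement factor, because it applies to a larger fraction of the execution time.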

33. CPU Performance Equation
- CPU time = Instruction count x CPI x Clock cycle time
  - Clock cycle time: hardware technology and organization
  - CPI: organization and Instruction Set Architecture (ISA)
  - Instruction count: ISA and compiler technology
- We will focus mostly on organization issues
- Many performance-enhancing techniques improve one factor with small or predictable impacts on the other two.

34. Example
- Parameters:
  - FP operations (including FPSQR): 25% of instructions, CPI = 4
  - All other instructions: CPI = 1.33
  - FPSQR: 2% of instructions, CPI = 20
- Compare two designs: decrease the CPI of FPSQR to 2, or decrease the CPI of all FP operations to 2.5
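The example can be worked through the CPU performance equation; since instruction count and clock cycle time are the same in both designs, comparing CPI suffices:

```python
def overall_cpi(mix):
    """mix: (fraction of instructions, CPI) pairs covering all instructions."""
    return sum(frac * cpi for frac, cpi in mix)

base = overall_cpi([(0.25, 4), (0.75, 1.33)])       # ~2.00

# Design 1: FPSQR (2% of instructions) drops from CPI 20 to CPI 2.
design1 = base - 0.02 * (20 - 2)                    # ~1.64

# Design 2: all FP operations drop to CPI 2.5.
design2 = overall_cpi([(0.25, 2.5), (0.75, 1.33)])  # ~1.62
```

With equal instruction counts and clock rates, the lower overall CPI of design 2 means it is slightly faster, echoing the Amdahl's Law example.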

35. Principle of Locality
- The most important program property: programs tend to reuse data and instructions they have used recently.
- Rule of thumb: a program spends 90% of its execution time in only 10% of its code.
- We can predict a program's behavior in the near future from its accesses in the recent past:
  - Temporal locality: recently accessed items are likely to be accessed again soon.
  - Spatial locality: items with nearby addresses tend to be referenced close together in time.

36. Principles of Computer Design
- Take advantage of parallelism: e.g. multiple processors, disks, memory banks, pipelining, multiple functional units
- Principle of locality: reuse of data and instructions
- Focus on the common case
- Amdahl's Law

37. Principles of Computer Design: The Processor Performance Equation
- CPU time = Instruction count x Cycles per instruction x Clock cycle time

38. Principles of Computer Design
- With different instruction types having different CPIs: total clock cycles = sum over instruction types i of (IC_i x CPI_i), and overall CPI is that sum divided by the total instruction count.

39. Figure 1.13: Photograph of an Intel Core i7 microprocessor die, which is evaluated in Chapters 2 through 5. The dimensions are 18.9 mm by 13.6 mm (257 mm^2) in a 45 nm process. (Courtesy Intel.) Copyright © 2011, Elsevier Inc. All rights reserved.

40. Figure 1.14: Floorplan of the Core i7 die in Figure 1.13 (left), with a close-up of the floorplan of the second core (right).

41. Performance and Price-Performance (SPEC) (figure)

42. Performance and Price-Performance (TPC-C) (figure)

43. Misc. Items
- Check the SPEC web site for more information: http://www.spec.org
- Read the Fallacies and Pitfalls section. For example, "MIPS is an accurate measure for comparing performance among computers" is a fallacy:
  - MIPS depends on the instruction set, so it is difficult to compare MIPS across computers with different instruction sets.
  - MIPS varies between programs on the same computer.
  - MIPS can vary inversely with performance! (Consider a machine with floating-point hardware versus software floating-point routines.)

44. Example Using MIPS
- Instruction distribution:
  - ALU: 43%, 1 cycle/instruction
  - Load: 21%, 2 cycles/instruction
  - Store: 12%, 2 cycles/instruction
  - Branch: 24%, 2 cycles/instruction
- An optimizing compiler eliminates 50% of the ALU instructions
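To see the MIPS fallacy in action, the example can be worked through with an assumed clock rate; the 500 MHz figure below is hypothetical, since the slide does not give one:

```python
def overall_cpi(mix):
    """mix: (fraction of instructions, cycles per instruction) pairs."""
    return sum(f * c for f, c in mix)

clock_hz = 500e6  # hypothetical clock rate, not from the slide

mix_before = [(0.43, 1), (0.21, 2), (0.12, 2), (0.24, 2)]
cpi_before = overall_cpi(mix_before)         # 1.57
mips_before = clock_hz / (cpi_before * 1e6)  # ~318

# The optimizer removes half the (1-cycle) ALU instructions, so only
# 78.5% of the original instructions remain; re-normalize the mix.
remaining = 1 - 0.43 / 2
mix_after = [(0.215 / remaining, 1), (0.21 / remaining, 2),
             (0.12 / remaining, 2), (0.24 / remaining, 2)]
cpi_after = overall_cpi(mix_after)           # ~1.73
mips_after = clock_hz / (cpi_after * 1e6)    # ~290

# Cycles per *original* instruction drop from 1.57 to ~1.36, so the
# optimized program is faster even though its MIPS rating is lower.
```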

