Course Introduction and Overview

Presentation on theme: "Course Introduction and Overview"— Presentation transcript:

1 Course Introduction and Overview
Pradondet Nilagupta Spring 2005 (original notes from Prof. Randy Katz, Prof. Dan Connors, Prof. Amirali Baniasadi) Digital System Architecture September 11, 2018

2 204521 Digital System Architecture
References Main Textbook (required): Computer Architecture: A Quantitative Approach, 3rd Edition, John L. Hennessy, David A. Patterson, Morgan Kaufmann, 2003. Supplementary Texts: Advanced Computer Architectures: A Design Space Approach, Dezso Sima, Terence Fountain, Peter Kacsuk, Addison-Wesley, 1997. Computer System Design and Architecture, Vincent P. Heuring, Harry F. Jordan, Addison-Wesley, 1997. Computer Organization, V. Carl Hamacher, Zvonko G. Vranesic, Safwat G. Zaky, McGraw-Hill, 1996. Digital System Architecture September 11, 2018

3 204521 Digital System Architecture
Grading 25% Homework 30% Midterm Exam 30% Final Exam 15% Paper Presentation Digital System Architecture September 11, 2018

4 204521 Digital System Architecture
Topic Coverage Fundamentals of Computer Design (Chapter 1) Instruction Set Principles (Chapter 2) Pipelining: Basic and Intermediate Concepts (Appendix A) Instruction-Level Parallelism (Chapters 3, 4) Memory Hierarchy Design (Chapter 5) Storage Systems (Chapter 7) Computer Arithmetic (Appendix H) Vector Processors (Appendix G) Digital System Architecture September 11, 2018

5 204521 Digital System Architecture
Related Courses Digital Design & Organization (strong prerequisite) leads into Computer Architecture (why, analysis, evaluation), which leads into Parallel Architectures, Languages, Systems (how to build it, implementation details). Basic knowledge of the organization of a computer is assumed! Digital System Architecture September 11, 2018

6 Course Focus (1/2)
Course Focus (1/2) To understand the design techniques, machine structures, technology factors, and evaluation methods that will determine the form of computers in the 21st century. (Diagram: computer architecture, covering instruction set design, organization, and hardware, sits at the intersection of technology, parallelism, programming languages, applications, operating systems, interface design (instruction set architecture), measurement & evaluation, and history.) Digital System Architecture September 11, 2018

7 Computer Architecture Is …
the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation. (Amdahl, Blaauw, and Brooks, 1964) Digital System Architecture September 11, 2018

8 Computer Architecture’s Changing Definition
1950s to 1960s: Computer Architecture Course: Computer Arithmetic 1970s to mid 1980s: Computer Architecture Course: Instruction Set Design, especially ISA appropriate for compilers 1990s: Computer Architecture Course: Design of CPU, memory system, I/O system, Multiprocessors, Networks 2010s: Computer Architecture Course: Self adapting systems? Self organizing structures? DNA Systems/Quantum Computing? Digital System Architecture September 11, 2018

9 Computer Architecture Topics (1/2)
Input/Output and Storage: disks, WORM, tape; RAID; emerging technologies; interleaving; bus protocols; DRAM. Memory Hierarchy: L2 cache, L1 cache; coherence, bandwidth, latency; addressing, protection, exception handling. Instruction Set Architecture and VLSI implementation. Pipelining and Instruction-Level Parallelism: pipelining, hazard resolution, superscalar execution, reordering, prediction, speculation. Digital System Architecture September 11, 2018

10 Computer Architecture Topics (2/2)
Multiprocessors, Networks and Interconnections: processor-memory-switch (P-M-S) organization, interconnection networks, topologies, routing, bandwidth, latency, reliability. Digital System Architecture September 11, 2018

11 204521 Digital System Architecture
Throughout this text we will focus on optimizing machine cost per performance Digital System Architecture September 11, 2018

12 Computer Architecture
Role of a computer architect: To design and engineer the various levels of a computer system to maximize performance and programmability within limits of technology and cost Digital System Architecture September 11, 2018

13 204521 Digital System Architecture
Levels of Abstraction S/W and H/W consist of hierarchical layers of abstraction; each layer hides the details of lower layers from the layer above. The instruction set architecture abstracts the H/W and S/W interface and allows many implementations of varying cost and performance to run the same S/W. Digital System Architecture September 11, 2018

14 The Task of a Computer Designer
Determine what attributes are important for a new machine, then design a machine to maximize performance while staying within cost constraints. What are these tasks? Instruction set design, functional organization, logic design, and implementation (IC design, packaging, power, cooling, ...). Digital System Architecture September 11, 2018

15 Instruction Set Architecture (ISA)
Refers to the actual programmer-visible instruction set. Serves as the boundary between software and hardware. Must be designed to survive changes in hardware technology, software technology, and application characteristics, e.g., 80xx, 68xxx, 80x86. Digital System Architecture September 11, 2018

16 Instruction Set Architecture (ISA)
Complete set of instructions used by a machine ISA: Abstract interface between the HW and lowest-level SW. It encompasses information needed to write machine-language programs including Instructions Memory size Registers used . . . Digital System Architecture September 11, 2018

17 Instruction Set Architecture (ISA)
Instruction Execution Cycle Fetch Instruction From Memory Decode Instruction determine its size & action Fetch Operand data Execute instruction & compute results or status Store Result in memory Determine Next Instruction’s address Digital System Architecture September 11, 2018
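As a rough side illustration (not from the slides), this cycle can be sketched in C for a hypothetical toy accumulator machine; the opcode names, encoding, and memory layout below are invented purely for illustration.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical toy machine: 256 words of memory, one accumulator.
   Each instruction word is (opcode << 8) | address. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint16_t mem[256] = {
        /* program: acc = mem[10] + mem[11]; mem[12] = acc; halt */
        (OP_LOAD  << 8) | 10,
        (OP_ADD   << 8) | 11,
        (OP_STORE << 8) | 12,
        (OP_HALT  << 8),
    };
    mem[10] = 3; mem[11] = 4;

    uint16_t pc = 0, acc = 0;
    for (;;) {
        uint16_t inst    = mem[pc];        /* 1. fetch instruction from memory        */
        uint16_t op      = inst >> 8;      /* 2. decode: determine its size & action  */
        uint16_t addr    = inst & 0xFF;
        uint16_t operand = mem[addr];      /* 3. fetch operand data                   */
        if (op == OP_HALT) break;
        if (op == OP_LOAD)  acc = operand;           /* 4. execute & compute result   */
        if (op == OP_ADD)   acc = acc + operand;
        if (op == OP_STORE) mem[addr] = acc;          /* 5. store result in memory     */
        pc = pc + 1;                       /* 6. determine next instruction's address */
    }
    printf("mem[12] = %u\n", (unsigned)mem[12]);      /* prints 7 */
    return 0;
}

Real CPUs overlap and reorder these steps (pipelining, superscalar issue), which is what the later chapters cover.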

18 Organization and Hardware (1/2)
Organization includes the high-level aspects of computer design, such as the memory system, the bus structure, and the internal CPU where arithmetic, logic, branch, and data transfer operations are implemented. E.g., the SPARC 2 and SPARC 20 have the same instruction set but different organizations. Digital System Architecture September 11, 2018

19 Organization and Hardware (2/2)
Hardware is used to refer to the specifics of a machine: the detailed logic design and the packaging technology. Machines with identical ISAs and nearly identical organizations may still differ in their detailed hardware implementation. Digital System Architecture September 11, 2018

20 Choosing between 2 designs
What should the computer architect be aware of in choosing between two designs? Design complexity: a complex design takes longer to complete, which means it will need higher performance to be competitive when it ships. Design time: both hardware and software. Digital System Architecture September 11, 2018

21 204521 Digital System Architecture
Early Computing 1946: ENIAC, us Army, 18,000 Vacuum Tubes 1949: UNIVAC I, $250K, 48 systems sold 1954: IBM 701, Core Memory 1957: Moving Head Disk 1958: Transistor, FORTRAN, ALGOL, CDC & DEC Founded 1964: IBM 360, CDC 6600, DEC PDP-8 1969: UNIX 1970: FLOPPY DISK 1981: IBM PC, 1st Successful Portable (Osborne1) 1986: Connection Machine, MAX Headroom Debut Digital System Architecture September 11, 2018

22 Underlying Technologies
Year  Logic               Storage             Prog. Lang.        O/S
54    Tubes               core (8 ms)
58    Transistor (10 µs)                      FORTRAN
60                                            ALGOL, COBOL       Batch
64    Hybrid (1 µs)       thin film (200 ns)  Lisp, APL, Basic
66    IC (100 ns)                             PL/1, Simula, C
67                                                               Multiprog.
71    LSI (10 ns)         1k DRAM             O.O.               V.M.
73    (8-bit µP)
75    (16-bit µP)         4k DRAM
78    VLSI (10 ns)        16k DRAM                               Networks
80                        64k DRAM
84    (32-bit µP)         256k DRAM           ADA
87    ULSI                1M DRAM
89    GaAs                4M DRAM             C++
92    (64-bit µP)         16M DRAM            Fortran90
(Figure labels: Generation, Evolutionary, Parallelism.) Digital System Architecture September 11, 2018

23 204521 Digital System Architecture
History 1. “Big Iron” Computers: Used vacuum tubes, electric relays and bulk magnetic storage devices. No microprocessors. No memory. Example: ENIAC (1945), IBM Mark 1 (1944) Digital System Architecture September 11, 2018

24 204521 Digital System Architecture
History Von Neumann: proposed the stored-program concept; EDSAC (1949) was the first practical stored-program computer. Uses memory. Importance: we are still using the same basic design. Digital System Architecture September 11, 2018

25 The Von Neumann Computer Model
Partitioning of the computing engine into components: Central Processing Unit (CPU): control unit (instruction decode, sequencing of operations) and datapath (registers, arithmetic and logic unit, buses). Memory: instruction and operand storage. Input/Output (I/O) subsystem: I/O bus, interfaces, devices. The stored-program concept: instructions from an instruction set are fetched from a common memory and executed one at a time. (Diagram: CPU (control and datapath with registers, ALU, buses), memory (instructions, data), and I/O devices form the computer system.) Major CPU performance limitation: the von Neumann computing model implies sequential execution, one instruction at a time. Digital System Architecture September 11, 2018

26 Hardware Components of Any Computer
Five classic components of all computers: 1. Control Unit; 2. Datapath; 3. Memory; 4. Input; 5. Output. The control unit and datapath form the processor (active); memory (passive) is where programs and data live when running; input and output devices include keyboard and mouse, display and printer, disks, and networks. Digital System Architecture September 11, 2018

27 Generic CPU Machine Instruction Execution Steps
(Implied by the von Neumann computer model.) Instruction fetch: obtain instruction from program storage. Decode: determine required actions and instruction size. Operand fetch: locate and obtain operand data. Execute: compute result value or status. Result store: deposit results in storage for later use. Next: determine successor (next) instruction. Major CPU performance limitation: the von Neumann computing model implies sequential execution, one instruction at a time. Digital System Architecture September 11, 2018

28 Computer Components: Datapath of a von Neumann Machine
(Diagram: general-purpose registers connected by buses to the ALU input registers Op1 and Op2, the ALU, and an ALU output register holding OP1 + OP2.) Digital System Architecture September 11, 2018

29 204521 Digital System Architecture
Computer Components Processor(CPU): Active part of the motherboard Performs calculations & activates devices Gets instruction & data from memory Components are connected via Buses Bus: Collection of parallel wires Transmits data, instructions, or control signals Motherboard Physical chips for I/O connections, memory, & CPU Digital System Architecture September 11, 2018

30 204521 Digital System Architecture
Computer Components CPU consists of Datapath (ALU+ Registers): Performs arithmetic & logical operations Control (CU): Controls the data path, memory, & I/O devices Sends signals that determine operations of datapath, memory, input & output Digital System Architecture September 11, 2018

31 204521 Digital System Architecture
Technology Change Technology changes rapidly HW Vacuum tubes: Electron emitting devices Transistors: On-off switches controlled by electricity Integrated Circuits( IC/ Chips): Combines thousands of transistors Very Large-Scale Integration( VLSI): Combines millions of transistors What next? SW Machine language: Zeros and ones Assembly language: Mnemonics High-Level Languages: English-like Artificial Intelligence languages: Functions & logic predicates Object-Oriented Programming: Objects & operations on objects Digital System Architecture September 11, 2018

32 Technology => dramatic change
Processor logic capacity: about 30% per year; clock rate: about 20% per year. Memory: DRAM capacity: about 60% per year (4x every 3 years); memory speed: about 10% per year; cost per bit: improves about 25% per year. Disk capacity: about 60% per year. Question: does everything look OK? Digital System Architecture September 11, 2018

33 204521 Digital System Architecture
Software Evolution. Machine language Assembly language High-level languages Subroutine libraries There is a large gap between what is convenient for computers & what is convenient for humans Translation/Interpretation is needed between both Digital System Architecture September 11, 2018

34 Language Evolution
High-level language program (in C): swap (int v[], int k) { int temp; temp = v[k]; v[k] = v[k+1]; v[k+1] = temp; } Assembly language program (for MIPS): swap: muli $2, $5, 4 add $2, $4, $2 lw $15, 0($2) lw $18, 4($2) sw $18, 0($2) sw $15, 4($2) jr $31 Binary machine language program (for MIPS): the corresponding string of 0s and 1s (not reproduced here). Digital System Architecture September 11, 2018

35 204521 Digital System Architecture
HW - SW Components Hardware Memory components Registers Register file memory Disks Functional components Adder, multiplier, dividers, . . . Comparators Control signals Software Data Simple Characters Integers Floating-point Pointers Structured Arrays Structures ( records) Instructions Data transfer Arithmetic Shift Control flow Comparison . . . Digital System Architecture September 11, 2018

36 Predictions for the Late 1990s (1/2)
Technology Very large dynamic RAM: 64 MBits and beyond Large fast Static RAM: 1 MB, 10ns Complete systems on a chip 10+ Million Transistors Parallelism Superscalar, Superpipeline, Vector, Multiprocessors Processor Arrays Digital System Architecture September 11, 2018

37 Predictions for the Late 1990s (2/2)
Low Power 50% of PCs portable by 1995 Performance per watt Parallel I/O Many applications I/O limited, not computation Computation scaling, but memory, I/O bandwidth not keeping pace Multimedia New interface technologies Video, speech, handwriting, virtual reality, … Digital System Architecture September 11, 2018

38 204521 Digital System Architecture
Moore’s Law Digital System Architecture September 11, 2018

39 Technology => dramatic change
Processor logic capacity: about 30% increase per year clock rate: about 20% increase per year Memory DRAM capacity: about 60% increase per year (4x every 3 years) Memory speed: about 10% increase per year Cost per bit: about 25% improvement per year Disk Capacity: about 60% increase per year Higher logic density gave room for instruction pipeline & cache Performance optimization no longer implies smaller programs Computers became lighter and more power efficient Digital System Architecture September 11, 2018

40 204521 Digital System Architecture
Dead Computer Society ACRI Alliant American Supercomputer Ametek Applied Dynamics Astronautics BBN CDC Convex Cray Computer Cray Research Culler-Harris Culler Scientific Cydrome Dana/Ardent/Stellar/Stardent Denelcor Elxsi ETA Systems Evans and Sutherland Computer Division Gould NPL Guiltech Intel Scientific Computers International Parallel Machines Kendall Square Research Key Computer Laboratories MasPar Meiko Multiflow Myrias Numerix Prisma Thinking Machines Saxpy Scientific Computer Systems (SCS) Soviet Supercomputers Supertek Supercomputer Systems (SSI) Suprenum Digital System Architecture September 11, 2018

41 204521 Digital System Architecture
The Processor Chip Digital System Architecture September 11, 2018

42 204521 Digital System Architecture
Intel 4004 Die Photo Introduced in 1971 First microprocessor 2,250 transistors 12 mm2 108 kHz Digital System Architecture September 11, 2018

43 204521 Digital System Architecture
Intel 8086 Die Scan 29,000 transistors 33 mm2 5 MHz Introduced in 1978 Basic architecture of the IA32 PC Digital System Architecture September 11, 2018

44 204521 Digital System Architecture
Intel 486 Die Scan 1,200,000 transistors 81 mm2 25 MHz Introduced in 1989 1st pipelined implementation of IA32 Digital System Architecture September 11, 2018

45 204521 Digital System Architecture
Pentium Die Photo 3,100,000 transistors 296 mm2 60 MHz Introduced in 1993 1st superscalar implementation of IA32 Digital System Architecture September 11, 2018

46 204521 Digital System Architecture
Pentium III 9,500,000 transistors 125 mm2 450 MHz Introduced in 1999 Digital System Architecture September 11, 2018

47 204521 Digital System Architecture
MOORE'S LAW: Processor-DRAM Memory Gap (latency). (Chart, 1980-2000, performance vs. time on a log scale: CPU performance ("Moore's Law") improves about 60%/yr (2x/1.5 yr), DRAM latency improves about 9%/yr (2x/10 yrs), and the processor-memory performance gap grows about 50% per year.) Latency cliché: note that x86 didn't have on-chip cache until 1989. Digital System Architecture September 11, 2018

48 Moore’s Not-Exactly-Law
Not a law of nature But fairly accurate over 38 years and counting No exponential is forever but we can delay “forever” (Gordon Moore in 2003) More about Moore’s Law at Digital System Architecture September 11, 2018

49 It's all about money. (Trends affect profits, costs, etc.)
Digital System Architecture September 11, 2018

50 204521 Digital System Architecture
Performance Trend In general, tradeoffs should improve performance. The natural idea here: HW becomes cheaper and easier to manufacture, so we can make our processor do more things… Digital System Architecture September 11, 2018

51 Price Trends (Pentium III)
Digital System Architecture September 11, 2018

52 Price Trends (DRAM memory)
Digital System Architecture September 11, 2018

53 Computer Engineering Methodology
Technology Trends Digital System Architecture September 11, 2018

54 Computer Engineering Methodology
Evaluate Existing Systems for Bottlenecks Benchmarks Technology Trends Digital System Architecture September 11, 2018

55 Computer Engineering Methodology
Evaluate Existing Systems for Bottlenecks Benchmarks Technology Trends Simulate New Designs and Organizations Workloads Digital System Architecture September 11, 2018

56 Computer Engineering Methodology
Evaluate Existing Systems for Bottlenecks Implementation Complexity Benchmarks Technology Trends Implement Next Generation System Simulate New Designs and Organizations Workloads Digital System Architecture September 11, 2018

57 Context for Designing New Architectures
Application Area Special Purpose (e.g., DSP) / General Purpose Scientific (FP intensive) / Commercial Level of Software Compatibility Object Code/Binary Compatible (cost HW vs. SW) Assembly Language (dream to be different from binary) Programming Language; Why not? Digital System Architecture September 11, 2018

58 Context for Designing New Architectures
Operating System Requirements for General Purpose Applications Size of Address Space Memory Management/Protection Context Switch Interrupts and Traps Standards: Innovation vs. Competition IEEE 754 Floating Point I/O Bus Networks Operating Systems / Programming Languages Digital System Architecture September 11, 2018

59 Recent Trends in Computer Design
The cost/performance ratio of computing systems has seen a steady decline due to advances in: Integrated circuit technology: decreasing feature size, λ; clock rate improves roughly in proportion to the improvement in λ, and the number of transistors improves in proportion to the square of λ (or faster). Architectural improvements in CPU design. Microprocessor systems directly reflect IC improvement in terms of a yearly 35 to 55% improvement in performance. Assembly language has been mostly eliminated and replaced by other alternatives such as C or C++. Standard operating systems (UNIX, NT) lowered the cost of introducing new architectures. Emergence of RISC architectures and RISC-core (x86) architectures. Adoption of quantitative approaches to computer design based on empirical performance observations. Digital System Architecture September 11, 2018

60 Computer Performance and Cost
Pradondet Nilagupta Fall 2005 (original notes from Randy Katz, & Prof. Jan M. Rabaey , UC Berkeley Prof. Shaaban, RIT) Digital System Architecture September 11, 2018

61 Review: What is Computer Architecture?
(Diagram: the computer architect works at the intersection of applications, technology, machine organization (registers, IR), interfaces (API, ISA, link, I/O channel), and measurement & evaluation.) Digital System Architecture September 11, 2018

62 Review: Computer System Components
CPU Core: 1 GHz and up, 4-way superscalar RISC or RISC-core (x86); deep instruction pipelines, dynamic scheduling, multiple FP and integer FUs, dynamic branch prediction, hardware speculation. Caches (all non-blocking): L1 (on chip, separate or unified, set associative), L2 (on chip, unified, set associative), L3 (off or on chip, unified, set associative). CPU examples: Alpha, AMD K7 (EV6 bus), Intel PII/PIII (GTL bus), Intel P4. Memory: SDRAM PC100/PC133, 2-way interleaved, ~900 MB/s (64-bit); Double Data Rate (DDR) SDRAM PC3200, 200 MHz DDR, 4-way interleaved, ~3.2 GB/s (one 64-bit channel), ~6.4 GB/s (two 64-bit channels); RAMbus DRAM (RDRAM), 400 MHz DDR, 16 bits wide (32 banks), ~1.6 GB/s. Interconnect: Front Side Bus (FSB), memory controller, memory bus controllers, off- or on-chip adapters, I/O buses (current standard example: PCI, 32-64 bits wide; PCI-X, 133 MHz, 64-bit, 1024 MB/s), NICs. I/O devices: disks, displays, keyboards, networks. Chipset: North Bridge, South Bridge, I/O subsystem. Digital System Architecture September 11, 2018

63 The Architecture Process
(Diagram: new concepts are created, their cost & performance are estimated, and they are sorted into good ideas, mediocre ideas, and bad ideas.) Digital System Architecture September 11, 2018

64 Performance Measurement and Evaluation
Many dimensions to computer performance: CPU execution time, by instruction or sequence (floating point, integer, branch performance); cache bandwidth; main memory bandwidth; I/O performance (bandwidth, seeks, pixels or polygons per second). Relative importance depends on the applications. Digital System Architecture September 11, 2018

65 204521 Digital System Architecture
Evaluation Tools Benchmarks, traces, & mixes: macrobenchmarks & suites (application execution time), microbenchmarks (measure one aspect of performance), traces (replay recorded accesses: cache, branch, register). Simulation at many levels: ISA, cycle accurate, RTL, gate, circuit; trade fidelity for simulation rate. Area and delay estimation. Analysis, e.g., queuing theory. (Figure: example instruction mix MOVE 39%, BR 20%, LOAD 20%, STORE 10%, ALU 11%; example trace LD 5EA3, ST 31FF, …, LD 1EA2, ….) Digital System Architecture September 11, 2018

66 204521 Digital System Architecture
Benchmarks Microbenchmarks measure one performance dimension cache bandwidth main memory bandwidth procedure call overhead FP performance weighted combination of microbenchmark performance is a good predictor of application performance gives insight into the cause of performance bottlenecks Macrobenchmarks application execution time measures overall performance, but on just one application Perf. Dimensions Applications Micro Macro Digital System Architecture September 11, 2018

67 Some Warnings about Benchmarks
Benchmarks measure the whole system application compiler operating system architecture implementation Popular benchmarks typically reflect yesterday’s programs computers need to be designed for tomorrow’s programs Benchmark timings often very sensitive to alignment in cache location of data on disk values of data Benchmarks can lead to inbreeding or positive feedback if you make an operation fast (slow) it will be used more (less) often so you make it faster (slower) and it gets used even more (less) and so on… Digital System Architecture September 11, 2018

68 Choosing Programs To Evaluate Performance
Levels of programs or benchmarks that could be used to evaluate performance: Actual Target Workload: Full applications that run on the target machine. Real Full Program-based Benchmarks: Select a specific mix or suite of programs that are typical of targeted applications or workload (e.g SPEC95, SPEC CPU2000). Small “Kernel” Benchmarks: Key computationally-intensive pieces extracted from real programs. Examples: Matrix factorization, FFT, tree search, etc. Best used to test specific aspects of the machine. Microbenchmarks: Small, specially written programs to isolate a specific aspect of performance characteristics: Processing: integer, floating point, local memory, input/output, etc. Digital System Architecture September 11, 2018

69 204521 Digital System Architecture
Types of Benchmarks
Actual Target Workload: Pros: representative. Cons: very specific; non-portable; complex: difficult to run or measure.
Full Application Benchmarks: Pros: portable; widely used; measurements useful in reality. Cons: less representative than actual workload.
Small "Kernel" Benchmarks: Pros: easy to run, early in the design cycle. Cons: easy to "fool" by designing hardware to run them well.
Microbenchmarks: Pros: identify peak performance and potential bottlenecks. Cons: peak performance results may be a long way from real application performance.
Digital System Architecture September 11, 2018

70 SPEC: System Performance Evaluation Cooperative
The most popular and industry-standard set of CPU benchmarks. SPECmarks, 1989: 10 programs yielding a single number ("SPECmarks"). SPEC92, 1992: SPECint92 (6 integer programs) and SPECfp92 (14 floating-point programs). SPEC95, 1995: SPECint95 (8 integer programs): go, m88ksim, gcc, compress, li, ijpeg, perl, vortex; SPECfp95 (10 floating-point intensive programs): tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, apsi, fpppp, wave5. Performance relative to a Sun SuperSPARC I (50 MHz), which is given a score of SPECint95 = SPECfp95 = 1. SPEC CPU2000, 1999: CINT2000 (12 integer programs), CFP2000 (14 floating-point intensive programs). Performance relative to a Sun Ultra5_10 (300 MHz), which is given a score of SPECint2000 = SPECfp2000 = 100. Digital System Architecture September 11, 2018

71 204521 Digital System Architecture
SPEC First Round One program: 99% of time in single line of code New front-end compiler could improve dramatically Digital System Architecture September 11, 2018

72 Impact of Means on SPECmark89 for IBM 550
Impact of Means on SPECmark89 for IBM 550 (without and with a special compiler option). (Table: ratio to VAX, time, and weighted time, before and after the compiler option, for gcc, espresso, spice, doduc, nasa, li, eqntott, matrix, fpppp, and tomcatv; per-program values not reproduced.) Means: Geometric (ratio 1.33), Arithmetic (ratio 1.16), Weighted Arithmetic (ratio 1.09). Digital System Architecture September 11, 2018

73 204521 Digital System Architecture
SPEC CPU2000 Programs (Benchmark, Language, Description). Application domain: engineering and scientific computation.
CINT2000 (Integer):
164.gzip (C) Compression
175.vpr (C) FPGA Circuit Placement and Routing
176.gcc (C) C Programming Language Compiler
181.mcf (C) Combinatorial Optimization
186.crafty (C) Game Playing: Chess
197.parser (C) Word Processing
252.eon (C++) Computer Visualization
253.perlbmk (C) PERL Programming Language
254.gap (C) Group Theory, Interpreter
255.vortex (C) Object-oriented Database
256.bzip2 (C) Compression
300.twolf (C) Place and Route Simulator
CFP2000 (Floating Point):
168.wupwise (Fortran 77) Physics / Quantum Chromodynamics
171.swim (Fortran 77) Shallow Water Modeling
172.mgrid (Fortran 77) Multi-grid Solver: 3D Potential Field
173.applu (Fortran 77) Parabolic / Elliptic Partial Differential Equations
177.mesa (C) 3-D Graphics Library
178.galgel (Fortran 90) Computational Fluid Dynamics
179.art (C) Image Recognition / Neural Networks
183.equake (C) Seismic Wave Propagation Simulation
187.facerec (Fortran 90) Image Processing: Face Recognition
188.ammp (C) Computational Chemistry
189.lucas (Fortran 90) Number Theory / Primality Testing
191.fma3d (Fortran 90) Finite-element Crash Simulation
200.sixtrack (Fortran 77) High Energy Nuclear Physics Accelerator Design
301.apsi (Fortran 77) Meteorology: Pollutant Distribution
Source: Digital System Architecture September 11, 2018

74 Top 20 SPEC CPU2000 Results (As of March 2002)
Top 20 SPECint2000 and Top 20 SPECfp2000 (columns: #, MHz, processor, peak, base). (Table: the two rankings list processors including POWER, Pentium 4, Pentium 4 Xeon, Alpha 21264A/B/C, UltraSPARC-III and UltraSPARC-III Cu, Athlon, Athlon XP, Athlon MP, Pentium III, Pentium III Xeon, Itanium, PA-RISC, MIPS R-series, SPARC64 GP, POWER RS64-IV, and POWER3-II; clock rates and scores not reproduced.) Source: Digital System Architecture September 11, 2018

75 Common Benchmarking Mistakes (1/2)
Only average behavior represented in test workload Skewness of device demands ignored Loading level controlled inappropriately Caching effects ignored Buffer sizes not appropriate Inaccuracies due to sampling ignored Digital System Architecture September 11, 2018

76 Common Benchmarking Mistakes (2/2)
Ignoring monitoring overhead Not validating measurements Not ensuring same initial conditions Not measuring transient (cold start) performance Using device utilizations for performance comparisons Collecting too much data but doing too little analysis Digital System Architecture September 11, 2018

77 Architectural Performance Laws and Rules of Thumb
Digital System Architecture September 11, 2018

78 Measurement and Evaluation
Architecture is an iterative process: Searching the space of possible designs Make selections Evaluate the selections made Good measurement tools are required to accurately evaluate the selection. Digital System Architecture September 11, 2018

79 204521 Digital System Architecture
Measurement Tools Benchmarks, Traces, Mixes Cost, delay, area, power estimation Simulation (many levels) ISA, RTL, Gate, Circuit Queuing Theory Rules of Thumb Fundamental Laws Digital System Architecture September 11, 2018

80 Measuring and Reporting Performance
What do we mean by saying one computer is faster than another? Its program runs in less time. Response time or execution time: the time until users see the output. Elapsed time: the latency to complete a task, including disk accesses, memory accesses, I/O activities, and operating system overhead. Throughput: the total amount of work done in a given time. Digital System Architecture September 11, 2018

81 204521 Digital System Architecture
Performance: "increasing" and "decreasing"? We use the terms "improve performance" and "improve execution time" when we mean increase performance and decrease execution time: improve performance = increase performance; improve execution time = decrease execution time. Digital System Architecture September 11, 2018

82 Metrics of Performance
Metrics at each level of the system: Application: answers per month, operations per second. Programming language / compiler / ISA: (millions of) instructions per second (MIPS), (millions of) FP operations per second (MFLOP/s). Datapath / control: megabytes per second. Function units / transistors, wires, pins: cycles per second (clock rate). Digital System Architecture September 11, 2018

83 Does Anybody Really Know What Time it is?
UNIX time command: 90.7u 12.9s 2:39 65%. User CPU time (time spent in the program) = 90.7 sec; system CPU time (time spent in the OS) = 12.9 sec; elapsed time (response time) = 2:39 = 159 sec. (90.7 + 12.9)/159 * 100 = 65%, the percentage of elapsed time that is CPU time; 35% of the time was spent in I/O or running other programs. Digital System Architecture September 11, 2018

84 204521 Digital System Architecture
Example UNIX time command output: 90.7u 12.9s 2:39 65%. User CPU time is 90.7 sec; system CPU time is 12.9 sec; elapsed time is 2 min 39 sec (159 sec). The percentage of elapsed time that is CPU time is (90.7 + 12.9)/159 = 65%. Digital System Architecture September 11, 2018

85 204521 Digital System Architecture
Time CPU time: the time the CPU is computing, not including time waiting for I/O or running other programs. User CPU time: CPU time spent in the program. System CPU time: CPU time spent in the operating system performing tasks requested by the program. CPU time = User CPU time + System CPU time. Digital System Architecture September 11, 2018

86 204521 Digital System Architecture
Performance System Performance elapsed time on unloaded system CPU performance user CPU time on an unloaded system Digital System Architecture September 11, 2018

87 Two notions of “performance”
Two notions of "performance":
Plane             Speed      Passengers   Throughput (pmph)   DC to Paris
Boeing 747        610 mph    470          286,700             6.5 hours
BAC/Sud Concorde  1350 mph   132          178,200             3 hours
Which has higher performance? ° Time to do the task (execution time, response time, latency). ° Tasks per day, hour, week, sec, ns (throughput, bandwidth). Response time and throughput are often in opposition. Digital System Architecture September 11, 2018

88 204521 Digital System Architecture
Example Time of Concorde vs. Boeing 747? Concorde is 1350 mph / 610 mph = 2.2 times faster = 6.5 hours / 3 hours. Throughput of Concorde vs. Boeing 747? Concorde is 178,200 pmph / 286,700 pmph = 0.62 "times faster"; Boeing is 286,700 pmph / 178,200 pmph = 1.6 "times faster". Boeing is 1.6 times ("60%") faster in terms of throughput; Concorde is 2.2 times ("120%") faster in terms of flying time. We will focus primarily on execution time for a single job. Digital System Architecture September 11, 2018

89 Computer Performance Measures: Program Execution Time (1/2)
For a specific program compiled to run on a specific machine (CPU) “A”, the following parameters are provided: The total instruction count of the program. The average number of cycles per instruction (average CPI). Clock cycle of machine “A” Digital System Architecture September 11, 2018

90 Computer Performance Measures: Program Execution Time (2/2)
How can one measure the performance of this machine running this program? Intuitively the machine is said to be faster or has better performance running this program if the total execution time is shorter. Thus the inverse of the total measured program execution time is a possible performance measure or metric: PerformanceA = 1 / Execution TimeA How to compare performance of different machines? What factors affect performance? How to improve performance?!!!! Digital System Architecture September 11, 2018

91 Comparing Computer Performance Using Execution Time
To compare the performance of two machines (or CPUs) "A", "B" running a given specific program: PerformanceA = 1 / Execution TimeA; PerformanceB = 1 / Execution TimeB. Machine A is n times faster than machine B (or slower, if n < 1) means: Speedup = n = PerformanceA / PerformanceB = Execution TimeB / Execution TimeA. Digital System Architecture September 11, 2018

92 204521 Digital System Architecture
Example For a given program: Execution time on machine A: ExecutionA = 1 second Execution time on machine B: ExecutionB = 10 seconds The performance of machine A is 10 times the performance of machine B when running this program, or: Machine A is said to be 10 times faster than machine B when running this program. The two CPUs may target different ISAs provided the program is written in a high level language (HLL) Digital System Architecture September 11, 2018

93 CPU Execution Time: The CPU Equation
A program is comprised of a number of instructions executed, I (measured in instructions/program). The average instruction executed takes a number of cycles per instruction (CPI) to complete (measured in cycles/instruction). The CPU has a fixed clock cycle time C = 1/clock rate (measured in seconds/cycle). CPU execution time is the product of these three parameters: CPU time = Seconds/Program = (Instructions/Program) x (Cycles/Instruction) x (Seconds/Cycle), i.e. T = I x CPI x C, where T is execution time per program in seconds, I is the number of instructions executed, CPI is the average CPI for the program, and C is the CPU clock cycle time. Digital System Architecture September 11, 2018
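A minimal C sketch of this equation (not from the slides; the function name is illustrative, and the values are those of the example worked on the next slide):

#include <stdio.h>

/* T = I x CPI x C, with clock cycle C = 1 / clock rate */
double cpu_time(double instructions, double cpi, double clock_rate_hz) {
    return instructions * cpi * (1.0 / clock_rate_hz);
}

int main(void) {
    /* 10,000,000 instructions, average CPI 2.5, 200 MHz clock */
    printf("%.3f seconds\n", cpu_time(10e6, 2.5, 200e6));   /* 0.125 */
    return 0;
}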

94 CPU Execution Time: Example
A program is running on a specific machine with the following parameters: total executed instruction count: 10,000,000 instructions; average CPI for the program: 2.5 cycles/instruction; CPU clock rate: 200 MHz (clock cycle = 5 x 10^-9 seconds). What is the execution time for this program? CPU time = Instruction count x CPI x Clock cycle = 10,000,000 x 2.5 x (1 / clock rate) = 10,000,000 x 2.5 x 5 x 10^-9 = 0.125 seconds. (CPU time = Seconds/Program = Instructions/Program x Cycles/Instruction x Seconds/Cycle.) Digital System Architecture September 11, 2018

95 Aspects of CPU Execution Time
CPU time = Instruction count x CPI x Clock cycle (T = I x CPI x C). Instruction count I (executed) depends on the program used, the compiler, and the ISA; average CPI depends on the program, compiler, ISA, and CPU organization; clock cycle C depends on the CPU organization and technology (VLSI). Digital System Architecture September 11, 2018

96 Factors Affecting CPU Performance
CPU time = Seconds/Program = (Instructions/Program) x (Cycles/Instruction) x (Seconds/Cycle)
Factor                               Instruction Count I   CPI   Clock Cycle C
Program                              X                     X
Compiler                             X                     X
Instruction Set Architecture (ISA)   X                     X
Organization (CPU Design)                                  X     X
Technology (VLSI)                                                X
Digital System Architecture September 11, 2018

97 Performance Comparison: Example
From the previous example: a program is running on a specific machine with the following parameters: total executed instruction count, I: 10,000,000 instructions; average CPI for the program: 2.5 cycles/instruction; CPU clock rate: 200 MHz. Using the same program with these changes: a new compiler is used (new instruction count 9,500,000, new CPI 3.0) and a faster CPU implementation (new clock rate = 300 MHz). What is the speedup with the changes? Speedup = Old Execution Time / New Execution Time = (Iold x CPIold x Clock cycleold) / (Inew x CPInew x Clock cyclenew) = (10,000,000 x 2.5 x 5 x 10^-9) / (9,500,000 x 3 x 3.33 x 10^-9) = 0.125 / 0.095 = 1.32, or 32% faster after the changes. Digital System Architecture September 11, 2018

98 Instruction Types & CPI
Given a program with n types or classes of instructions executed on a given CPU with the following characteristics: Ci = count of instructions of type i; CPIi = cycles per instruction for type i. Then: CPI = CPU clock cycles / Instruction count I, where CPU clock cycles = Σ (CPIi x Ci) and Instruction count I = Σ Ci, for i = 1, 2, …, n. Digital System Architecture September 11, 2018

99 Instruction Types & CPI: An Example
An instruction set has three instruction classes (for a specific CPU design): class A: CPI 1; class B: CPI 2; class C: CPI 3. Two code sequences have the following instruction counts: sequence 1: A = 2, B = 1, C = 2; sequence 2: A = 4, B = 1, C = 1. CPU cycles for sequence 1 = 2 x 1 + 1 x 2 + 2 x 3 = 10 cycles; CPI for sequence 1 = clock cycles / instruction count = 10 / 5 = 2. CPU cycles for sequence 2 = 4 x 1 + 1 x 2 + 1 x 3 = 9 cycles; CPI for sequence 2 = 9 / 6 = 1.5. Digital System Architecture September 11, 2018
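The same arithmetic can be checked with a short C sketch (an illustration, not part of the slides):

#include <stdio.h>

int main(void) {
    double cpi_class[3] = {1, 2, 3};     /* CPI of instruction classes A, B, C */
    double seq1[3] = {2, 1, 2};          /* instruction counts for sequence 1  */
    double seq2[3] = {4, 1, 1};          /* instruction counts for sequence 2  */
    double *seqs[2] = {seq1, seq2};

    for (int s = 0; s < 2; s++) {
        double cycles = 0, count = 0;
        for (int i = 0; i < 3; i++) {
            cycles += cpi_class[i] * seqs[s][i];   /* CPU clock cycles = sum of CPIi x Ci */
            count  += seqs[s][i];                  /* instruction count I = sum of Ci     */
        }
        printf("sequence %d: %.0f cycles, CPI = %.2f\n", s + 1, cycles, cycles / count);
    }
    return 0;   /* prints CPI = 2.00 and CPI = 1.50 */
}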

100 Instruction Frequency & CPI
Given a program with n types or classes of instructions with the following characteristics: Ci = count of instructions of type i; CPIi = average cycles per instruction of type i; Fi = frequency or fraction of instruction type i executed = Ci / total executed instruction count = Ci / I. Then: CPI = Σ (CPIi x Fi). Fraction of total execution time for instructions of type i = (CPIi x Fi) / CPI. Digital System Architecture September 11, 2018

101 Instruction Type Frequency & CPI: A RISC Example
CPI = Σ (CPIi x Fi). Program profile or executed instructions mix (typical mix, base machine, reg/reg):
Op      Freq Fi  CPIi  CPIi x Fi  % Time
ALU     50%      1     0.5        23% (= .5/2.2)
Load    20%      5     1.0        45% (= 1/2.2)
Store   10%      3     0.3        14% (= .3/2.2)
Branch  20%      2     0.4        18% (= .4/2.2)
Overall CPI = sum = 2.2
Digital System Architecture September 11, 2018
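A short C sketch (illustrative only) that reproduces the CPI and time fractions above:

#include <stdio.h>

int main(void) {
    const char *op[4]   = {"ALU", "Load", "Store", "Branch"};
    double      freq[4]  = {0.50, 0.20, 0.10, 0.20};   /* Fi   */
    double      cpi_i[4] = {1, 5, 3, 2};                /* CPIi */

    double cpi = 0;
    for (int i = 0; i < 4; i++) cpi += cpi_i[i] * freq[i];   /* CPI = sum of CPIi x Fi */
    printf("overall CPI = %.1f\n", cpi);                      /* 2.2 */

    for (int i = 0; i < 4; i++)       /* fraction of total execution time per type */
        printf("%-6s %5.1f%% of time\n", op[i], 100.0 * cpi_i[i] * freq[i] / cpi);
    return 0;
}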

102 Performance Terminology
"X is n% faster than Y" means: ExTime(Y) / ExTime(X) = Performance(X) / Performance(Y) = 1 + n/100. Equivalently: n = 100 x (Performance(X) - Performance(Y)) / Performance(Y), and n = 100 x (ExTime(Y) - ExTime(X)) / ExTime(X). Digital System Architecture September 11, 2018

103 204521 Digital System Architecture
Example Example: Y takes 15 seconds to complete a task, X takes 10 seconds. What % faster is X? n = 100 x (ExTime(Y) - ExTime(X)) / ExTime(X) = 100 x (15 - 10) / 10 = 50%. Digital System Architecture September 11, 2018

104 204521 Digital System Architecture
Speedup Speedup due to enhancement E: Speedup(E) = ExTime without E / ExTime with E = Performance with E / Performance without E. Suppose that enhancement E accelerates a fraction Fraction_enhanced of the task by a factor Speedup_enhanced, and the remainder of the task is unaffected; then what is ExTime(E)? What is Speedup(E)? Digital System Architecture September 11, 2018

105 204521 Digital System Architecture
Amdahl's Law States that the performance improvement to be gained from using some faster mode of execution is limited by the fraction of the time the faster mode can be used. Speedup = Performance for entire task using the enhancement / Performance for entire task without using the enhancement, or Speedup = Execution time for entire task without the enhancement / Execution time for entire task using the enhancement. Digital System Architecture September 11, 2018

106 204521 Digital System Architecture
Amdahl's Law ExTime_new = ExTime_old x [(1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]. Speedup_overall = ExTime_old / ExTime_new = 1 / [(1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]. Digital System Architecture September 11, 2018
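A minimal C sketch of the law (illustrative; the function name is not from the slides):

#include <stdio.h>

/* Overall speedup when a fraction f of the original execution time
   is accelerated by a factor s (Amdahl's Law). */
double amdahl_speedup(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void) {
    /* the FP example on the following slides: 10% of the time sped up 2x */
    printf("speedup = %.3f\n", amdahl_speedup(0.10, 2.0));   /* about 1.053 */
    return 0;
}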

107 Example of Amdahl’s Law
Floating point instructions improved to run 2X; but only 10% of actual instructions are FP ExTimenew = Speedupoverall = Digital System Architecture September 11, 2018

108 Example of Amdahl’s Law
Floating point instructions improved to run 2X; but only 10% of actual instructions are FP. ExTime_new = ExTime_old x (0.9 + 0.1/2) = 0.95 x ExTime_old. Speedup_overall = 1 / 0.95 = 1.053. Digital System Architecture September 11, 2018

109 Performance Enhancement Calculations: Amdahl's Law
The performance enhancement possible due to a given design improvement is limited by the amount that the improved feature is used. Amdahl's Law: performance improvement or speedup due to enhancement E: Speedup(E) = Execution time without E / Execution time with E = Performance with E / Performance without E. Suppose that enhancement E accelerates a fraction F of the execution time by a factor S and the remainder of the time is unaffected; then: Execution time with E = ((1 - F) + F/S) x Execution time without E. Hence the speedup is: Speedup(E) = Execution time without E / [((1 - F) + F/S) x Execution time without E] = 1 / ((1 - F) + F/S). Digital System Architecture September 11, 2018

110 Pictorial Depiction of Amdahl’s Law
Enhancement E accelerates fraction F of the original execution time by a factor of S. Before (execution time without enhancement E, shown normalized to 1): unaffected fraction (1 - F) plus affected fraction F = 1. After (execution time with enhancement E): the unaffected fraction (1 - F) is unchanged; the affected fraction shrinks to F/S. Speedup(E) = Execution time without enhancement E / Execution time with enhancement E = 1 / ((1 - F) + F/S). Digital System Architecture September 11, 2018

111 Performance Enhancement Example
For the RISC machine with the following instruction mix given earlier:
Op      Freq  Cycles  CPI(i)  % Time
ALU     50%   1       0.5     23%
Load    20%   5       1.0     45%
Store   10%   3       0.3     14%
Branch  20%   2       0.4     18%
If a CPU design enhancement improves the CPI of load instructions from 5 to 2, what is the resulting performance improvement from this enhancement? Fraction enhanced = F = 45% or .45; unaffected fraction = 100% - 45% = 55% or .55; factor of enhancement = S = 5/2 = 2.5. Using Amdahl's Law: Speedup(E) = 1 / ((1 - F) + F/S) = 1 / (.55 + .45/2.5) = 1 / .73 = 1.37. Digital System Architecture September 11, 2018

112 An Alternative Solution Using CPU Equation
Op      Freq  Cycles  CPI(i)  % Time
ALU     50%   1       0.5     23%
Load    20%   5       1.0     45%
Store   10%   3       0.3     14%
Branch  20%   2       0.4     18%
If a CPU design enhancement improves the CPI of load instructions from 5 to 2, what is the resulting performance improvement from this enhancement? Old CPI = 2.2. New CPI = .5 x 1 + .2 x 2 + .1 x 3 + .2 x 2 = 1.6. Speedup(E) = Original execution time / New execution time = (Instruction count x old CPI x clock cycle) / (Instruction count x new CPI x clock cycle) = old CPI / new CPI = 2.2 / 1.6 = 1.37, which is the same speedup obtained from Amdahl's Law in the first solution. Digital System Architecture September 11, 2018

113 Performance Enhancement Example (1/2)
A program runs in 100 seconds on a machine, with multiply operations responsible for 80 seconds of this time. By how much must the speed of multiplication be improved to make the program four times faster? Desired speedup = 4 = 100 / Execution time with enhancement, so Execution time with enhancement = 25 seconds. 25 seconds = (100 - 80) seconds + 80 seconds / n; 25 seconds = 20 seconds + 80 seconds / n; 5 = 80 seconds / n; n = 80/5 = 16. Hence multiplication should be 16 times faster to get a speedup of 4. Digital System Architecture September 11, 2018

114 Performance Enhancement Example (2/2)
For the previous example, with a program running in 100 seconds on a machine with multiply operations responsible for 80 seconds of this time: by how much must the speed of multiplication be improved to make the program five times faster? Desired speedup = 5 = 100 / Execution time with enhancement, so Execution time with enhancement = 20 seconds. 20 seconds = (100 - 80) seconds + 80 seconds / n; 20 seconds = 20 seconds + 80 seconds / n; 0 = 80 seconds / n. No amount of multiplication speed improvement can achieve this. Digital System Architecture September 11, 2018

115 Extending Amdahl's Law To Multiple Enhancements
Suppose that enhancement Ei accelerates a fraction Fi of the execution time by a factor Si and the remainder of the time is unaffected; then: Speedup = 1 / [(1 - Σ Fi) + Σ (Fi / Si)]. Note: all fractions Fi refer to the original execution time before the enhancements are applied. Digital System Architecture September 11, 2018
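A possible C sketch of this extended form, assuming the enhancements affect disjoint fractions of the original execution time (names and structure are illustrative):

#include <stdio.h>

/* Speedup = 1 / [ (1 - sum Fi) + sum (Fi / Si) ] for non-overlapping fractions. */
double amdahl_multi(const double *f, const double *s, int n) {
    double unaffected = 1.0, remaining = 0.0;
    for (int i = 0; i < n; i++) {
        unaffected -= f[i];        /* fraction untouched by any enhancement */
        remaining  += f[i] / s[i]; /* time left in each enhanced fraction   */
    }
    return 1.0 / (unaffected + remaining);
}

int main(void) {
    /* the example on the next slide: F = 20%, 15%, 10%; S = 10, 15, 30 */
    double f[3] = {0.20, 0.15, 0.10};
    double s[3] = {10, 15, 30};
    printf("speedup = %.2f\n", amdahl_multi(f, s, 3));   /* about 1.71 */
    return 0;
}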

116 Amdahl's Law With Multiple Enhancements: Example
Three CPU performance enhancements are proposed with the following speedups and percentage of the code execution time affected: Speedup1 = S1 = 10, Percentage1 = F1 = 20%; Speedup2 = S2 = 15, Percentage2 = F2 = 15%; Speedup3 = S3 = 30, Percentage3 = F3 = 10%. While all three enhancements are in place in the new design, each enhancement affects a different portion of the code and only one enhancement can be used at a time. What is the resulting overall speedup? Speedup = 1 / [(1 - .20 - .15 - .10) + .20/10 + .15/15 + .10/30] = 1 / [.55 + .02 + .01 + .0033] = 1 / .5833 = 1.71. Digital System Architecture September 11, 2018

117 Pictorial Depiction of Example
Before (execution time with no enhancements, normalized to 1): unaffected fraction .55, plus F1 = .2 (S1 = 10), F2 = .15 (S2 = 15), F3 = .1 (S3 = 30). After (execution time with enhancements): the unaffected fraction .55 is unchanged; the enhanced fractions shrink to .2/10 + .15/15 + .1/30, giving .55 + .02 + .01 + .0033 = .5833. Speedup = 1 / .5833 = 1.71. Note: all fractions (Fi, i = 1, 2, 3) refer to original execution time. Digital System Architecture September 11, 2018

118 204521 Digital System Architecture
Means The arithmetic mean and the harmonic mean can be weighted and represent total execution time; the geometric mean is consistent independent of the reference machine. Digital System Architecture September 11, 2018

119 How to Summarize Performance (1/2)
Arithmetic mean (weighted arithmetic mean) tracks execution time: AM = (1/n) Σ Timei, weighted AM = Σ wi x Timei. Harmonic mean (weighted harmonic mean) of rates (e.g., MFLOPS) tracks execution time: HM = n / Σ (1/Ratei), weighted HM = 1 / Σ (wi / Ratei). Digital System Architecture September 11, 2018
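A small C sketch of both means (illustrative names and values, not from the slides; weights are assumed to sum to 1):

#include <stdio.h>

/* weighted arithmetic mean of per-program execution times */
double weighted_arith_mean(const double *time, const double *w, int n) {
    double sum = 0;
    for (int i = 0; i < n; i++) sum += w[i] * time[i];
    return sum;
}

/* weighted harmonic mean of per-program rates, e.g. MFLOPS */
double weighted_harm_mean(const double *rate, const double *w, int n) {
    double sum = 0;
    for (int i = 0; i < n; i++) sum += w[i] / rate[i];
    return 1.0 / sum;
}

int main(void) {
    double t[2] = {10.0, 100.0};   /* execution times in seconds */
    double r[2] = {50.0, 5.0};     /* rates in MFLOPS            */
    double w[2] = {0.5, 0.5};      /* equal weights              */
    printf("AM time = %.1f s, HM rate = %.2f MFLOPS\n",
           weighted_arith_mean(t, w, 2), weighted_harm_mean(r, w, 2));
    return 0;
}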

120 How to Summarize Performance (2/2)
Normalized execution time is handy for scaling performance (e.g., X times faster than SPARCstation 10) Arithmetic mean impacted by choice of reference machine Use the geometric mean for comparison: Independent of chosen machine but not good metric for total execution time Digital System Architecture September 11, 2018

121 Summarizing Performance
Arithmetic mean: average execution time; gives more weight to longer-running programs. Weighted arithmetic mean: more important programs can be emphasized, but what do we use as weights? Different weights will make different machines look better (see Figure 1.16 in your textbook, Hennessy/Patterson 3rd Edition) (see board for extension of previous example). Digital System Architecture September 11, 2018

122 Normalizing & the Geometric Mean
Normalized execution time: take a reference machine R; the normalized time for program A on machine X = (time to run A on X) / (time to run A on R). Geometric mean: GM = (ratio1 x ratio2 x … x ratio_n)^(1/n). Neat property of the geometric mean: consistent whatever the reference machine. Do not use the arithmetic mean for normalized execution times. Digital System Architecture September 11, 2018
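A short C sketch of the geometric mean of normalized execution times (illustrative):

#include <stdio.h>
#include <math.h>    /* link with -lm */

/* geometric mean of n ratios (time on machine X / time on reference R) */
double geo_mean(const double *ratio, int n) {
    double log_sum = 0;
    for (int i = 0; i < n; i++) log_sum += log(ratio[i]);
    return exp(log_sum / n);
}

int main(void) {
    /* hypothetical normalized times for three programs */
    double ratios[3] = {0.5, 2.0, 1.0};
    printf("geometric mean = %.2f\n", geo_mean(ratios, 3));   /* 1.00 */
    return 0;
}

Because it multiplies ratios, the geometric mean gives the same ranking whichever machine is chosen as the reference, which is the consistency property mentioned above.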

123 Which Machine is “Better”?
                   Computer A   Computer B   Computer C
Program P1 (sec)   1            10           20
Program P2 (sec)   1000         100          20
Total time (sec)   1001         110          40
(Execution times as in the example of Figure 1.16, Hennessy/Patterson 3rd Edition.) Digital System Architecture September 11, 2018

124 Weighted Arithmetic Mean
Assume three weighting schemes (weight on P1 / weight on P2) and compute the weighted arithmetic means:
W(P1)/W(P2)   Comp A    Comp B   Comp C
.50/.50       500.50    55.00    20.00
.909/.091     91.91     18.19    20.00
.999/.001     2.00      10.09    20.00
Digital System Architecture September 11, 2018

125 Performance Evaluation
Given sales is a function of performance relative to the competition, big investment in improving product as reported by performance summary Good products created then have: Good benchmarks Good ways to summarize performance If benchmarks/summary inadequate, then choose between improving product for real programs vs. improving product to get more sales; Sales almost always wins! Execution time is the measure of performance Digital System Architecture September 11, 2018

126 Computer Performance Measures : MIPS Rating (1/3)
For a specific program running on a specific CPU, the MIPS rating is a measure of how many millions of instructions are executed per second: MIPS rating = Instruction count / (Execution time x 10^6) = Instruction count / (CPU clocks x Cycle time x 10^6) = (Instruction count x Clock rate) / (Instruction count x CPI x 10^6) = Clock rate / (CPI x 10^6). Major problem with the MIPS rating: as shown above, the MIPS rating does not account for the count of instructions executed (I). A higher MIPS rating in many cases may not mean higher performance or better execution time, e.g., due to compiler design variations. Digital System Architecture September 11, 2018
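A minimal C sketch of the MIPS formula (illustrative only):

#include <stdio.h>

/* MIPS = clock rate / (CPI x 10^6) */
double mips_rating(double clock_rate_hz, double cpi) {
    return clock_rate_hz / (cpi * 1e6);
}

int main(void) {
    /* the compiler-variation example a few slides ahead: 100 MHz, CPI 1.43 vs. 1.25 */
    printf("MIPS = %.1f vs. %.1f\n",
           mips_rating(100e6, 10.0 / 7.0), mips_rating(100e6, 1.25));
    return 0;
}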

127 Computer Performance Measures : MIPS Rating (2/3)
In addition the MIPS rating: Does not account for the instruction set architecture (ISA) used. Thus it cannot be used to compare computers/CPUs with different instruction sets. Easy to abuse: Program used to get the MIPS rating is often omitted. Often the Peak MIPS rating is provided for a given CPU which is obtained using a program comprised entirely of instructions with the lowest CPI for the given CPU design which does not represent real programs. Digital System Architecture September 11, 2018

128 Computer Performance Measures : MIPS Rating (3/3)
Under what conditions can the MIPS rating be used to compare performance of different CPUs? The MIPS rating is only valid to compare the performance of different CPUs provided that the following conditions are satisfied: The same program is used (actually this applies to all performance metrics) The same ISA is used The same compiler is used (Thus the resulting programs used to run on the CPUs and obtain the MIPS rating are identical at the machine code level including the same instruction count) Digital System Architecture September 11, 2018

129 Compiler Variations, MIPS, Performance: An Example (1/2)
For the machine with the following instruction classes: class A: CPI 1; class B: CPI 2; class C: CPI 3. For a given program, two compilers produced the following instruction counts (in millions): Compiler 1: A = 5, B = 1, C = 1; Compiler 2: A = 10, B = 1, C = 1. The machine is assumed to run at a clock rate of 100 MHz. Digital System Architecture September 11, 2018

130 Compiler Variations, MIPS, Performance: An Example (2/2)
MIPS = Clock rate / (CPI x 10^6); CPI = CPU execution cycles / Instruction count; CPU time = Instruction count x CPI / Clock rate. For compiler 1: CPI1 = (5 x 1 + 1 x 2 + 1 x 3) / (5 + 1 + 1) = 10 / 7 = 1.43; MIPS rating1 = (100 x 10^6) / (1.43 x 10^6) = 70.0; CPU time1 = ((5 + 1 + 1) x 10^6 x 1.43) / (100 x 10^6) = 0.10 seconds. For compiler 2: CPI2 = (10 x 1 + 1 x 2 + 1 x 3) / (10 + 1 + 1) = 15 / 12 = 1.25; MIPS rating2 = (100 x 10^6) / (1.25 x 10^6) = 80.0; CPU time2 = ((10 + 1 + 1) x 10^6 x 1.25) / (100 x 10^6) = 0.15 seconds. So compiler 2 gives the higher MIPS rating, but compiler 1 gives the shorter execution time. Digital System Architecture September 11, 2018

131 Computer Performance Measures :MFLOPS (1/2)
A floating-point operation is an addition, subtraction, multiplication, or division operation applied to numbers represented in a single- or double-precision floating-point format. MFLOPS, for a specific program running on a specific computer, is a measure of millions of floating-point operations (megaflops) per second: MFLOPS = Number of floating-point operations / (Execution time x 10^6). The MFLOPS rating is a better comparison measure between different machines than the MIPS rating, and is applicable even if the ISAs are different. Digital System Architecture September 11, 2018
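And a matching C sketch for MFLOPS (illustrative only):

#include <stdio.h>

/* MFLOPS = floating-point operations / (execution time x 10^6) */
double mflops_rating(double fp_operations, double exec_time_seconds) {
    return fp_operations / (exec_time_seconds * 1e6);
}

int main(void) {
    /* hypothetical: 4 x 10^8 FP operations executed in 2 seconds */
    printf("MFLOPS = %.1f\n", mflops_rating(4e8, 2.0));   /* 200.0 */
    return 0;
}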

132 Computer Performance Measures :MFLOPS (2/2)
Program-dependent: different programs have different percentages of floating-point operations present; e.g., compilers have no floating-point operations and yield an MFLOPS rating of zero. Dependent on the type of floating-point operations present in the program. Peak MFLOPS rating for a CPU: obtained using a program comprised entirely of the simplest floating-point instructions (with the lowest CPI) for the given CPU design, which does not represent real floating-point programs. Digital System Architecture September 11, 2018

133 Other ways to measure performance
(1) Use MIPS (millions of instructions/second). MIPS is a rate of operations per unit time. Performance can be specified as the inverse of execution time, so faster machines have a higher MIPS rating. So, bigger MIPS = faster machine. Right? MIPS = Instruction count / (Exec. time x 10^6) = Clock rate / (CPI x 10^6). Digital System Architecture September 11, 2018

134 204521 Digital System Architecture
Wrong!!! 3 significant problems with using MIPS: Problem 1: MIPS is instruction set dependent. (And different computer brands usually have different instruction sets.) Problem 2: MIPS varies between programs on the same computer. Problem 3: MIPS can vary inversely to performance! Let's look at an example of why MIPS doesn't work… Digital System Architecture September 11, 2018

135 A MIPS Example (1)
A MIPS Example (1) Consider the following computer, with instruction counts (in millions) for each instruction class: Compiler 1: A = 5, B = 1, C = 1; Compiler 2: A = 10, B = 1, C = 1. The machine runs at 100 MHz. Instruction A requires 1 clock cycle, instruction B requires 2 clock cycles, and instruction C requires 3 clock cycles. Note the important formula: CPI = CPU clock cycles / Instruction count = [Σ (CPIi x Ci), i = 1..n] / Instruction count. Digital System Architecture September 11, 2018

136 204521 Digital System Architecture
A MIPS Example (2) CPI1 = [(5 x 1) + (1 x 2) + (1 x 3)] x 10^6 cycles / [(5 + 1 + 1) x 10^6 instructions] = 10/7 = 1.43; MIPS1 = 100 MHz / 1.43 = 69.9. CPI2 = [(10 x 1) + (1 x 2) + (1 x 3)] x 10^6 cycles / [(10 + 1 + 1) x 10^6 instructions] = 15/12 = 1.25; MIPS2 = 100 MHz / 1.25 = 80.0. So, compiler 2 has a higher MIPS rating and should be faster? Digital System Architecture September 11, 2018

137 A MIPS Example (3)
A MIPS Example (3) Now let's compare CPU time. Note the important formula: CPU time = (Instruction count x CPI) / Clock rate. CPU time1 = (7 x 10^6 x 1.43) / (100 x 10^6) = 0.10 seconds. CPU time2 = (12 x 10^6 x 1.25) / (100 x 10^6) = 0.15 seconds. Therefore program 1 is faster despite a lower MIPS! More evidence that the only true measure of performance is execution time. Digital System Architecture September 11, 2018

