The University of Adelaide, School of Computer Science


Computer Architecture: A Quantitative Approach, Fifth Edition
Chapter 1: Fundamentals of Quantitative Design and Analysis

Computer Technology
Performance improvements have come from:
- Improvements in semiconductor technology: feature size, clock speed
- Improvements in computer architecture: enabled by high-level language (HLL) compilers and UNIX-based operating systems, which led to RISC architectures
Together these have enabled:
- Lightweight computers
- Productivity-based programming languages: C#, Java, Python
- SaaS, virtualization, cloud computing
Application evolution: speech, sound, images, video, "augmented/extended reality", "big data"

Single Processor Performance (figure): growth of single-processor performance over time, with annotations marking the introduction of RISC and the later move to multi-processor designs.

Current Trends in Architecture
- Cannot continue to leverage instruction-level parallelism (ILP) alone: single-processor performance improvement ended in 2003
- New models for performance:
  - Data-level parallelism (DLP)
  - Thread-level parallelism (TLP)
  - Request-level parallelism (RLP)
- These require explicit restructuring of the application

Classes of Computers
- Embedded computers (19 billion in 2010): emphasis on price
- Personal mobile devices (PMDs), i.e. smartphones and tablet computers (1.8 billion sold in 2010): emphasis on energy efficiency and real-time behavior
- Desktop computers (0.35 billion): emphasis on price-performance
- Servers (20 million): emphasis on availability (downtime is very costly), scalability, throughput
- Clusters / warehouse-scale computers: used for "Software as a Service" (SaaS) and the like; emphasis on availability (downtime can cost about $6M/hour at Amazon.com) and price-performance (power is roughly 80% of total cost)
  - Sub-class: supercomputers, with emphasis on floating-point performance, fast internal networks, and big-data analytics

Parallelism
Classes of parallelism in applications:
- Data-level parallelism (DLP): many data items can be operated on at the same time
- Task-level parallelism (TLP): independent tasks can run in parallel
Classes of architectural parallelism that exploit them:
- Instruction-level parallelism (ILP): instruction pipelining
- Vector architectures / graphics processing units (GPUs): apply a single instruction to a collection of data
- Thread-level parallelism: exploits DLP or TLP with tightly coupled hardware that supports multithreading
- Request-level parallelism: largely decoupled tasks executed in parallel

Flynn's Taxonomy
- Single instruction stream, single data stream (SISD): uniprocessor, possibly exploiting ILP
- Single instruction stream, multiple data streams (SIMD): vector architectures, multimedia extensions, graphics processing units
- Multiple instruction streams, single data stream (MISD): no commercial implementation
- Multiple instruction streams, multiple data streams (MIMD): tightly coupled MIMD exploits thread-level parallelism; loosely coupled MIMD exploits request-level parallelism

Defining Computer Architecture
- "Old" view of computer architecture: instruction set architecture (ISA) design, i.e. decisions about registers, memory addressing, addressing modes, instruction operands, available operations, control-flow instructions, and instruction encoding
- "Real" computer architecture: design to the specific requirements of the target machine, maximizing performance within constraints of cost, power, and availability; it includes the ISA, the microarchitecture (the memory system and its interconnection with the CPU), and the hardware

Trends in Technology
- Integrated circuit technology: transistor density +35%/year; die size +10-20%/year; overall integration +40-55%/year
- DRAM capacity: +25-40%/year (slowing)
- Flash capacity: +50-60%/year; 15-20x cheaper per bit than DRAM; the standard storage for PMDs
- Magnetic disk technology: +40%/year; 15-25x cheaper per bit than Flash; 300-500x cheaper per bit than DRAM
- Networking technology: discussed in another course
These trends in MOS technology enabled on-chip caches and multi-core chips.

Bandwidth and Latency
- Bandwidth (throughput): total work done in a given time; 10,000-25,000x improvement for processors, 300-1,200x improvement for memory and disks
- Latency (response time): time between the start and completion of an event; 30-80x improvement for processors, 6-8x improvement for memory and disks

Figure: log-log plot of bandwidth and latency milestones (Trends in Technology).

Transistors and Wires
- Feature size: the minimum size of a transistor or wire; shrank from 10 microns in 1971 to 0.032 microns in 2011
- Transistor performance scales roughly linearly with feature size, but wire delay does not improve as features shrink, unlike transistor switching delay
- Integration density scales quadratically with feature size
- Linear performance scaling combined with quadratic density growth presents two challenges: (1) power and (2) signal propagation delay

Trends in Power and Energy
The problem: getting power in, and getting the heat back out. Three power concerns:
1. Maximum power: the supply must maintain the supply voltage at peak demand
2. Thermal design power (TDP): characterizes sustained power consumption; used as the target for the power supply and cooling system; lower than peak power, higher than average power. Power is controlled via voltage- or temperature-dependent clock-rate reduction plus a thermal-overload trip
3. Energy efficiency: energy consumed per task. Example: a CPU that draws 20% more power but takes 30% less time per task uses 1.2 x 0.7 = 0.84 of the energy, i.e. it is more energy efficient

Dynamic Energy and Power
- Dynamic energy: dissipated when a transistor switches from 0 -> 1 or 1 -> 0; Energy = 1/2 x Capacitive load x Voltage^2
- Dynamic power: Power = 1/2 x Capacitive load x Voltage^2 x Frequency switched
- Reducing the clock rate reduces power but not the energy per task
- Reducing the voltage lowers both: supply voltages fell from 5 V to under 1 V in 20 years
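To make these relations concrete, here is a minimal Python sketch of the dynamic energy and power equations above; the capacitance, voltage, and frequency values are illustrative assumptions, not figures from the slides.

```python
# Minimal sketch of the dynamic energy/power relations above.
# The capacitive load, voltage, and frequency values are illustrative assumptions.

def dynamic_energy(cap_load, voltage):
    """Energy per 0->1 or 1->0 transition: 1/2 * C * V^2 (joules)."""
    return 0.5 * cap_load * voltage ** 2

def dynamic_power(cap_load, voltage, freq_switched):
    """Dynamic power: 1/2 * C * V^2 * f (watts)."""
    return dynamic_energy(cap_load, voltage) * freq_switched

C = 1e-9          # assumed effective switched capacitance, farads
V, f = 1.0, 3e9   # assumed supply voltage (volts) and switching frequency (Hz)

base = dynamic_power(C, V, f)
# Halving the clock halves power but leaves the energy per transition unchanged.
print(dynamic_power(C, V, f / 2) / base)                    # 0.5
# Cutting voltage by 15% reduces both energy and power quadratically.
print(dynamic_energy(C, 0.85 * V) / dynamic_energy(C, V))   # ~0.72
```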


Power
- The Intel 80386 consumed about 4 W; a 3.3 GHz Intel Core i7 consumes 130 W
- That heat must be dissipated from a 1.5 cm x 1.5 cm chip, which is about the limit of what can be cooled by air

Increasing Energy Efficiency
- Do nothing well: turn off the clock of idle units or cores
- Dynamic voltage-frequency scaling (DVFS)
- Low-power states for DRAM and disks (at the cost of a wake-up delay)
- Dynamic overclocking (Turbo Boost): running above the base operating frequency, typically by turning off some cores. Implemented by Intel from the Nehalem architecture onward; activated when the operating system requests the highest performance state of the processor; the performance states are defined by the Advanced Configuration and Power Interface (ACPI) specification, an open standard supported by all major operating systems

Dynamic Overclocking (Turbo Boost)
Frequency increases and decreases in fixed steps. Turbo Boost Max 3.0 was introduced in 2016 with the Broadwell-E microarchitecture.
For the Core i7-2920XM, the base operating frequency is 2.5 GHz. Turbo is specified as 7/7/9/10, where the first number is the number of 100 MHz steps supported when four cores are active, the second is for three cores, the third for two cores, and the fourth for one active core.

# of cores active | # of turbo steps | Max frequency | Calculation
4 or 3            | 7                | 3.20 GHz      | 2500 + (7 x 100) = 3200
2                 | 9                | 3.40 GHz      | 2500 + (9 x 100) = 3400
1                 | 10               | 3.50 GHz      | 2500 + (10 x 100) = 3500
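The table's arithmetic can be expressed directly. The sketch below assumes the Core i7-2920XM parameters given above (2.5 GHz base clock, 100 MHz turbo steps, the 7/7/9/10 encoding) and only illustrates how the maximum turbo frequency follows from the number of active cores.

```python
# Sketch of the Turbo Boost frequency calculation for the Core i7-2920XM example
# above (base clock 2.5 GHz, turbo steps encoded as "7/7/9/10", listed for
# 4, 3, 2, and 1 active cores; each step adds 100 MHz).

BASE_MHZ = 2500
STEP_MHZ = 100

def max_turbo_mhz(turbo_encoding: str, active_cores: int) -> int:
    steps = [int(s) for s in turbo_encoding.split("/")]
    # steps[0] applies to 4 active cores, steps[-1] to 1 active core
    return BASE_MHZ + steps[len(steps) - active_cores] * STEP_MHZ

for cores in (4, 3, 2, 1):
    print(cores, "active cores ->", max_turbo_mhz("7/7/9/10", cores), "MHz")
# 4 -> 3200, 3 -> 3200, 2 -> 3400, 1 -> 3500
```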

Static Power
- Static power consumption = static (leakage) current x voltage
- Scales with the number of transistors, and so with on-chip cache (SRAM)
- Reduced by power-gating idle sub-modules
- Race-to-halt: run at maximum speed to finish sooner and prolong idle periods
The primary evaluation metrics for design innovation are now tasks per joule and performance per watt, rather than performance per mm^2.

Trends in Cost
Cost-related issues:
- Yield: the percentage of manufactured devices that pass the tests
- Volume: increasing volume decreases cost
- Becoming a commodity: increases competition and lowers cost

Chip Manufacturing Process
Silicon ingots are 8-12 inches in diameter and 12-24 inches long; the wafers sliced from them are about 0.1 inch thick. Each finished die has one layer of transistors with 2-8 levels of metal conductor, separated by layers of insulator.

Integrated Circuit Cost
Cost per die = Wafer cost / (Dies per wafer x Die yield)
Dies per wafer = pi x (Wafer diameter / 2)^2 / Die area - pi x Wafer diameter / sqrt(2 x Die area)
Die yield (Bose-Einstein formula) = Wafer yield x 1 / (1 + Defects per unit area x Die area)^N
- Wafer yield is assumed to be 100%
- Defects per unit area = 0.016-0.057 defects per cm^2 for a 40 nm process (2010)
- N = process-complexity factor = 11.5-15.5 (40 nm, 2010)
The manufacturing process dictates the wafer cost, wafer yield, and defects per unit area; the architect's design determines the die area, which in turn affects yield and cost per die.


Cost of Die (example)
- Processed wafer cost: $5500
- Cost of a 1 cm^2 die: about $13
- Cost of a 2.25 cm^2 die: about $51
- Cost grows faster than linearly, roughly with the square of the increase in die area
- Additional costs: testing, packaging, test after packaging, and multi-layer fabrication masks
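A rough Python sketch of the die-cost model referenced above. The wafer cost, defect density, and process-complexity factor N follow the 40 nm figures on the earlier slide; the 300 mm wafer diameter is an assumption made here so the numbers land close to the $13 and $51 examples.

```python
import math

# Die-cost model sketch. Wafer cost ($5500), defect density, and N follow the
# 40 nm figures on the slides; the 300 mm (30 cm) wafer diameter is an assumption.

def dies_per_wafer(wafer_diam_cm, die_area_cm2):
    wafer_area = math.pi * (wafer_diam_cm / 2) ** 2
    return math.floor(wafer_area / die_area_cm2
                      - math.pi * wafer_diam_cm / math.sqrt(2 * die_area_cm2))

def die_yield(die_area_cm2, defects_per_cm2=0.03, n=13.5, wafer_yield=1.0):
    # Bose-Einstein yield model: wafer_yield / (1 + defects * area)^N
    return wafer_yield / (1 + defects_per_cm2 * die_area_cm2) ** n

def cost_per_die(wafer_cost, wafer_diam_cm, die_area_cm2):
    return wafer_cost / (dies_per_wafer(wafer_diam_cm, die_area_cm2)
                         * die_yield(die_area_cm2))

print(round(cost_per_die(5500, 30, 1.0), 2))   # roughly $13 for a 1 cm^2 die
print(round(cost_per_die(5500, 30, 2.25), 2))  # roughly $50 for a 2.25 cm^2 die
```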

Dependability
Systems alternate between two states of service:
1. Service accomplishment: service is delivered as specified
2. Service interruption: the delivered service deviates from the specified service
A failure is the transition from state 1 to state 2; a restoration is the transition from state 2 back to state 1.
Module reliability measures:
- Failures in time (FIT): number of failures per 1 billion (10^9) hours
- Mean time to failure (MTTF) = 10^9 / FIT
- Mean time to repair (MTTR): time required to restore service
- Mean time between failures (MTBF) = MTTF + MTTR
- Module availability = MTTF / MTBF

Dependability
Assuming failures are independent, the total system failure rate is the sum of the failure rates of the parts.
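As an illustration of these definitions, the sketch below sums hypothetical per-component FIT rates into a system failure rate and derives MTTF and availability from it; the component values are invented for the example.

```python
# A minimal sketch of the dependability metrics above. The component FIT
# values (failures per 10^9 hours) are hypothetical, for illustration only.

def system_mttf_hours(component_fits):
    """Failure rates add, so the system FIT is the sum of component FITs."""
    system_fit = sum(component_fits)
    return 1e9 / system_fit

def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / MTBF, where MTBF = MTTF + MTTR."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical system: 10 disks at 1,000,000-hour MTTF each (1000 FIT),
# one controller at 2000 FIT, one power supply at 5000 FIT.
fits = [1000] * 10 + [2000, 5000]
mttf = system_mttf_hours(fits)
print(round(mttf))                         # ~58,824 hours
print(availability(mttf, mttr_hours=24))   # availability with a 24-hour repair time
```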

Redundancy improves dependability

Measuring Performance
Typical performance metrics:
- Response time (execution time): the main metric for desktops
- Throughput: total amount of work done in a given time; the main metric for warehouse-scale computers
Speed of X relative to Y = Execution time of Y / Execution time of X
Execution time:
- Wall-clock time: includes all system overheads
- CPU time: computation time only
Benchmarks:
- Kernels (e.g. matrix multiply): small, key pieces of real applications
- Toy programs (e.g. sorting): fewer than 100 lines
- Benchmark suites: Standard Performance Evaluation Corporation (www.spec.org) and Transaction Processing Council (www.tpc.org)

SPEC desktop benchmark programs

Summarizing Performance Results
SPECRatio = reference execution time / measured execution time (larger is better).
The SPECRatios for a set of benchmarks are summarized with the geometric mean.
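A small sketch of that summary procedure, with made-up reference and measured execution times; it is only meant to show how per-benchmark SPECRatios combine via the geometric mean.

```python
from math import prod

# Sketch of summarizing SPEC results: SPECRatio = reference time / measured time
# per benchmark, summarized with the geometric mean. All times below are hypothetical.

def spec_ratios(reference_times, measured_times):
    return [ref / t for ref, t in zip(reference_times, measured_times)]

def geometric_mean(values):
    return prod(values) ** (1 / len(values))

ref     = [9650, 8050, 10490]   # hypothetical SPEC reference times (seconds)
machine = [ 500,  350,   720]   # hypothetical measured times on one machine

ratios = spec_ratios(ref, machine)
print([round(r, 1) for r in ratios])
print(round(geometric_mean(ratios), 1))  # single-number summary across benchmarks
```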

AMD Opteron vs. Intel Itanium 2 (comparison figure)

Principles of Computer Design
- Principle of locality: programs spend about 90% of their execution time in only 10% of the code
- Focus on the common case
- Amdahl's Law quantifies the overall performance improvement obtained from optimizing the common case:
  Overall speedup = 1 / ((1 - Fraction enhanced) + Fraction enhanced / Speedup enhanced)

Using Amdahl's law to compare design alternatives (figure).

Example: the effect of a 4150x improvement in power supply reliability on overall system reliability. Applying Amdahl's law requires knowing the fraction of time (or of other resources) consumed by the improved component.
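A minimal sketch of Amdahl's law as stated above; the two design alternatives compared here are hypothetical (40% of the time sped up 10x versus 80% of the time sped up 1.6x).

```python
# A minimal sketch of Amdahl's law:
#   speedup_overall = 1 / ((1 - f) + f / s)
# where f is the fraction of time the enhancement can be used and
# s is the speedup when it is used.

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    return 1 / ((1 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Hypothetical comparison of two design alternatives:
# enhancement A speeds up 40% of the time by 10x; B speeds up 80% by 1.6x.
print(round(amdahl_speedup(0.4, 10), 2))   # ~1.56
print(round(amdahl_speedup(0.8, 1.6), 2))  # ~1.43
```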

CPU Time Calculation: The Processor Performance Equation
CPU time = Instruction count x Cycles per instruction (CPI) x Clock cycle time
         = (Instruction count x CPI) / Clock rate

Average CPI
Different instruction types have different CPIs. If IC_i is the number of times instruction type i is executed in a program and CPI_i is its cycles per instruction, then:
Average CPI = ( sum over i of IC_i x CPI_i ) / Instruction count
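Putting the two equations together, here is a small sketch that computes a weighted-average CPI and CPU time from a hypothetical instruction mix, per-class CPIs, and clock rate; none of the numbers come from the slides.

```python
# Sketch of the processor performance equation:
#   CPU time = Instruction count x CPI x Clock cycle time
# with CPI computed as the weighted average over instruction classes.
# The instruction mix, per-class CPIs, and clock rate below are hypothetical.

def average_cpi(instruction_counts, cpis):
    total = sum(instruction_counts)
    return sum(ic * cpi for ic, cpi in zip(instruction_counts, cpis)) / total

def cpu_time_seconds(instruction_counts, cpis, clock_rate_hz):
    total = sum(instruction_counts)
    return total * average_cpi(instruction_counts, cpis) / clock_rate_hz

# Hypothetical mix: ALU, load/store, and branch instruction counts with their CPIs.
counts = [500e6, 300e6, 200e6]
cpis   = [1.0,   2.0,   1.5]
print(round(average_cpi(counts, cpis), 2))    # 1.4
print(cpu_time_seconds(counts, cpis, 2e9))    # 0.7 seconds at a 2 GHz clock
```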

Comparing Performance, Price, and Power
Comparing the performance/price of three small servers with SPECpower (ssj_ops = server-side Java operations per second):

Metric              | Server 1 | Server 2 | Server 3
ssj_ops/W           | 3034     | 2357     | 2696
ssj_ops/W per $1000 | 324      | 254      | 213
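The second row of the table is just the first row scaled by system price. The sketch below reproduces that calculation; the prices used are hypothetical values chosen so the results match the slide's ssj_ops/W per $1000 figures.

```python
# Sketch of the performance/price/power comparison above: overall ssj_ops per
# watt divided by system price in thousands of dollars. The ssj_ops/W figures
# come from the table above; the prices are hypothetical.

def ssj_ops_per_watt_per_1000_dollars(ssj_ops_per_watt, price_dollars):
    return ssj_ops_per_watt / (price_dollars / 1000)

servers = {
    # name: (ssj_ops per watt from the slide, hypothetical system price in $)
    "Server 1": (3034, 9360),
    "Server 2": (2357, 9280),
    "Server 3": (2696, 12660),
}
for name, (opw, price) in servers.items():
    print(name, round(ssj_ops_per_watt_per_1000_dollars(opw, price)))
# Server 1 ~324, Server 2 ~254, Server 3 ~213
```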

Instruction Set Architecture (ISA)
The ISA serves as the interface between software and hardware: it provides the mechanism by which software tells the hardware what to do.
The path from software to hardware:
- High-level language code (C, C++, Java, Fortran) is translated by a compiler into
- Assembly language code (architecture-specific statements), which an assembler translates into
- Machine language code (architecture-specific bit patterns) executed by the hardware

Instruction Set Design Issues
- Where are operands stored? Registers, memory, stack, accumulator
- How many explicit operands are there? 0, 1, 2, or 3
- How is the operand location specified? Register, immediate, indirect, ...
- What type and size of operands are supported? Byte, int, float, double, string, vector, ...
- What operations are supported? add, sub, mul, move, compare, ...

Classifying ISAs
- Accumulator (before 1960, e.g. 68HC11), 1-address:
  add A            acc <- acc + mem[A]
- Stack (1960s to 1970s), 0-address:
  add              tos <- tos + next
- Memory-memory (1970s to 1980s):
  2-address: add A, B       mem[A] <- mem[A] + mem[B]
  3-address: add A, B, C    mem[A] <- mem[B] + mem[C]
- Register-memory (1970s to present, e.g. 80x86), 2-address:
  add R1, A        R1 <- R1 + mem[A]
  load R1, A       R1 <- mem[A]
- Register-register / load-store (RISC, 1960s to present, e.g. MIPS), 3-address:
  add R1, R2, R3   R1 <- R2 + R3
  load R1, R2      R1 <- mem[R2]
  store R1, R2     mem[R1] <- R2

Code Sequence C = A + B for Four Instruction Set Classes

Stack  | Accumulator | Register (register-memory) | Register (load-store)
Push A | Load A      | Load R1, A                 | Load R1, A
Push B | Add B       | Add R1, B                  | Load R2, B
Add    | Store C     | Store C, R1                | Add R3, R1, R2
Pop C  |             |                            | Store C, R3

Types of Addressing Modes (VAX)

Addressing mode       | Example             | Action
1. Register direct    | Add R4, R3          | R4 <- R4 + R3
2. Immediate          | Add R4, #3          | R4 <- R4 + 3
3. Displacement       | Add R4, 100(R1)     | R4 <- R4 + M[100 + R1]
4. Register indirect  | Add R4, (R1)        | R4 <- R4 + M[R1]
5. Indexed            | Add R4, (R1 + R2)   | R4 <- R4 + M[R1 + R2]
6. Direct             | Add R4, (1000)      | R4 <- R4 + M[1000]
7. Memory indirect    | Add R4, @(R3)       | R4 <- R4 + M[M[R3]]
8. Autoincrement      | Add R4, (R2)+       | R4 <- R4 + M[R2]; R2 <- R2 + d
9. Autodecrement      | Add R4, -(R2)       | R2 <- R2 - d; R4 <- R4 + M[R2]
10. Scaled            | Add R4, 100(R2)[R3] | R4 <- R4 + M[100 + R2 + R3*d]

Studies by Clark and Emer indicate that modes 1-4 account for 93% of all operands on the VAX.
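To make the effective-address calculations concrete, here is a toy Python sketch of a few of these modes (register direct, immediate, displacement, register indirect, memory indirect); the register and memory contents are hypothetical, and this is an illustration of the semantics, not how a real VAX implements addressing.

```python
# Toy sketch of a few VAX addressing modes from the table above, to make the
# effective-address calculations concrete. Register/memory contents are hypothetical.

regs = {"R1": 8, "R2": 100, "R3": 1000, "R4": 10}
mem  = {8: 55, 100: 66, 108: 44, 1000: 77, 77: 99}

def operand(mode, value):
    if mode == "register":       # Add R4, R3      -> R3
        return regs[value]
    if mode == "immediate":      # Add R4, #3      -> 3
        return value
    if mode == "displacement":   # Add R4, 100(R1) -> M[100 + R1]
        disp, reg = value
        return mem[disp + regs[reg]]
    if mode == "reg_indirect":   # Add R4, (R1)    -> M[R1]
        return mem[regs[value]]
    if mode == "mem_indirect":   # Add R4, @(R3)   -> M[M[R3]]
        return mem[mem[regs[value]]]
    raise ValueError(f"unsupported mode: {mode}")

def add_to_r4(mode, value):      # Add R4, <operand>  ->  R4 <- R4 + operand
    regs["R4"] += operand(mode, value)

add_to_r4("immediate", 3)               # R4 <- 10 + 3
add_to_r4("displacement", (100, "R1"))  # R4 <- 13 + M[108] = 13 + 44
print(regs["R4"])                       # 57
```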