1
CS 465 Computer Architecture Fall 2009 Lecture 01: Introduction
Daniel Barbará (cs.gmu.edu/~dbarbara) [Adapted from Computer Organization and Design, Patterson & Hennessy, © 2005, UCB]
2
Course Administration
Instructor: Daniel Barbará, 4420 Eng. Bldg. Text (required): Computer Organization & Design – The Hardware/Software Interface, Patterson & Hennessy, 4th Edition
3
Grading Information
Grade determinants: Midterm Exam ~25%, Final Exam ~35%, Homeworks ~40%. Homeworks are due at the beginning of class (or, if it's code to be submitted electronically, by 17:00 on the due date). No late assignments will be accepted. Course prerequisite: a grade of C or better in CS 367. Note: evening midterm exam.
4
Acknowledgements Slides adapted from Dr. Zhong
Contributions from Dr. Setia. Slides also adapt materials from many other universities. IMPORTANT: slides are not intended as a replacement for the text. You spent the money on the book, please read it!
5
Course Topics (Tentative)
Instruction set architecture (Chapter 2) MIPS Arithmetic operations & data (Chapter 3) System performance (Chapter 4) Processor (Chapter 5) Datapath and control Pipelining to improve performance (Chapter 6) Memory hierarchy (Chapter 7) I/O (Chapter 8)
6
Focus of the Course How computers work
MIPS instruction set architecture The implementation of MIPS instruction set architecture – MIPS processor design Issues affecting modern processors Pipelining – processor performance improvement Cache – memory system, I/O systems
7
Why Learn Computer Architecture?
You want to call yourself a “computer scientist”: computer architecture impacts every other aspect of computer science. You may need to make a purchasing decision or offer “expert” advice. You want to build software people use and sell many, many copies (which needs performance). Both hardware and software affect performance: the algorithm determines the number of source-level statements; the language, compiler, and architecture determine the machine instructions (Chapters 2 and 3); the processor and memory determine how fast instructions are executed (Chapters 5, 6, and 7). Assessing and understanding performance is the topic of Chapter 4.
8
Outline Today Course logistics Computer architectures overview
Trends in computer architectures
9
Computer Systems Software Hardware
Application software – word processors, Internet browsers, games Systems software – compilers, operating systems Hardware – CPU, memory, I/O devices (mouse, keyboard, display, disks, networks, …)
10
Software [Figure: layered view of software, with applications software running on top of the operating system.]
11
How Do the Pieces Fit Together?
Coordination of many levels of abstraction, under a rapidly changing set of forces: design, measurement, and evaluation. From top to bottom: Application, Operating System, Compiler, Firmware, Instruction Set Architecture, Instruction Set Processor, Memory system, I/O system, Datapath & Control, Digital Design, Circuit Design.
12
Instruction Set Architecture
One of the most important abstractions is the ISA: the instruction set sits between software and hardware, forming a critical interface between HW and SW. Example: MIPS. Desired properties: convenience (from the software side) and efficiency (from the hardware side). The ISA is a contract between software and hardware.
13
What is Computer Architecture
Programmer’s view: a pleasant environment Operating system’s view: a set of resources (hw & sw) System architecture view: a set of components Compiler’s view: an instruction set architecture with OS help Microprocessor architecture view: a set of functional units VLSI designer’s view: a set of transistors implementing logic Mechanical engineer’s view: a heater!
14
What is Computer Architecture
Patterson & Hennessy: Computer architecture = Instruction set architecture + Machine organization + Hardware For this course, computer architecture mainly refers to ISA (Instruction Set Architecture) Programmer-visible, serves as the boundary between the software and hardware Modern ISA examples: MIPS, SPARC, PowerPC, DEC Alpha This class will focus on the ISA but not its implementation Analogy: we can talk about the functions of a digital clock (keeping time, displaying the time, setting the alarm) independently from its implementation (quartz crystal, LED displays, plastic buttons)
15
Organization and Hardware
Organization: high-level aspects of a computer’s design Principal components: memory, CPU, I/O, … How components are interconnected How information flows between components E.g. AMD Opteron 64 and Intel Pentium 4: same ISA but different organizations Hardware: detailed logic design and the packaging technology of a computer E.g. Pentium 4 and Mobile Pentium 4: nearly identical organizations but different hardware details
16
Types of computers and their applications
Desktop Run third-party software Office-to-home applications About 30 years old Servers Modern version of what used to be called mainframes, minicomputers and supercomputers Large workloads Built using the same technology as desktops but with higher capacity Expandable Scalable Reliable Large spectrum: from low-end (file storage, small businesses) to supercomputers (high-end scientific and engineering applications) Gigabytes to Terabytes to Petabytes of storage Examples: file servers, web servers, database servers
17
Types of computers… Embedded
Microprocessors everywhere! (washing machines, cell phones, automobiles, video games) Run one or a few applications Specialized hardware integrated with the application (not your common processor) Usually stringent limitations (battery power) Low tolerance for failure (you don’t want your airplane avionics to fail!) Becoming ubiquitous Engineered using processor cores The core allows the engineer to integrate other functions into the processor for fabrication on the same chip, using hardware description languages: Verilog, VHDL
18
Where is the Market? Millions of Computers
Rough “definitions”: desktops; servers and supercomputers (100s to 1000s of processors, GBytes to TBytes of main memory, TBytes to PBytes of secondary storage); and embedded systems (cell phones, automobile control, video games, entertainment systems such as digital TVs, PDAs, etc.). The computer (IT) industry is responsible for almost 10% of the GNP of the US. The embedded market has shown the strongest growth (40% compound annual growth, compared to only 9% for desktops; where do laptops fit?). These numbers do not include the low-end 8-bit and 16-bit embedded processors that are everywhere. Speed is not the only performance metric: power, space/volume, memory space, cost, and reliability also matter.
19
In this class you will learn
How programs written in a high-level language (e.g., Java) translate into the language of the hardware and how the hardware executes them. The interface between software and hardware and how software instructs hardware to perform the needed functions. The factors that determine the performance of a program The techniques that hardware designers employ to improve performance. As a consequence, you will understand what features may make one computer design better than another for a particular application
20
High-level to Machine Language
High-level language program (in C) Compiler Assembly language program (for MIPS) Assembler Binary machine language program (for MIPS)
21
Evolution… In the beginning there were only bits, and people spent countless hours trying to program in machine language. Finally, before everybody went insane, the assembler was invented: write in mnemonics called assembly language and let the assembler translate (a one-to-one translation): Add A,B. This wasn’t for everybody, obviously (imagine writing modern applications in assembly), so high-level languages were born, and with them compilers to translate to assembly, a one-to-many translation: C = A*(SQRT(B)+3.0)
22
THE BIG IDEA Levels of abstraction: each layer provides its own (simplified) view and hides the details of the next.
23
Instruction Set Architecture (ISA)
ISA: An abstract interface between the hardware and the lowest level software of a machine that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, and so on. “... the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation.” – Amdahl, Blaauw, and Brooks, 1964 Enables implementations of varying cost and performance to run identical software ABI (application binary interface): The user portion of the instruction set plus the operating system interfaces used by application programmers. Defines a standard for binary portability across computers.
24
ISA Type Sales (millions of processors). [Bar chart with approximate values; see the text for the correct figures.] Includes only 32- and 64-bit processors; “Others” includes Samsung, HP, AMD, TI, Transmeta (same ISA as IA-32), …
25
Organization of a computer
26
Anatomy of a Computer: 5 classic components
Input (keyboard, mouse), Output (display, printer), Memory (where programs and data live when running), Disk (where programs and data live when not running), and the Processor, which comprises the Datapath (“brawn”), performing the arithmetic operations, and the Control (“brain”), which guides the operation of the other components based on the program’s instructions.
27
PC Motherboard Closeup
Processor chip is hidden under the heat sink DRAM memories are on DIMMS (dual in-line memory modules)
28
Inside the Pentium 4
29
Moore’s Law In 1965, Gordon Moore predicted that the number of transistors that can be integrated on a die would double every 18 to 24 months (i.e., grow exponentially with time). Amazingly visionary: the million-transistor-per-chip barrier was crossed in the 1980s. Examples: 2,300 transistors, 1 MHz clock (Intel 4004); 16 million transistors (UltraSPARC III); 42 million transistors, 2 GHz clock (Intel Xeon, 2001); 55 million transistors, 3 GHz, 130 nm technology, 250 mm² die (Intel Pentium 4); 140 million transistors (HP PA-8500). Note that Moore’s law is not about speed predictions but about chip complexity.
30
Processor Performance Increase
[Chart: processor performance over time, log-scale y axis; each point is labeled model/clock-in-MHz: Intel Pentium 4/3000, Intel Xeon/2000, DEC Alpha 21264A/667, DEC Alpha 21264/600, DEC Alpha 5/500, DEC Alpha 5/300, DEC Alpha 4/266, DEC AXP/500, IBM POWER 100, HP 9000/750, IBM RS6000, MIPS M2000, MIPS M/120, SUN-4/260.] The rate of performance improvement has been between 1.5 and 1.6 times per year. How much longer will Moore’s Law hold?
31
Trend: Microprocessor Capacity
Itanium II: 241 million; Pentium 4: 55 million; Alpha 21264: 15 million; Alpha 21164: 9.3 million; PowerPC 620: 6.9 million; Pentium Pro: 5.5 million; Sparc Ultra: 5.2 million. Consistent with Moore’s Law. CMOS improvements: die size 2X every 3 yrs; line width halves every 7 yrs.
32
Moore’s Law “Transistor capacity doubles every 18-24 months”
“Cramming More Components onto Integrated Circuits”, Gordon Moore, Electronics, 1965: the number of transistors per cost-effective integrated circuit doubles every 18 months. Speed has doubled every 1.5 years (since ’85): 100X performance in the last decade.
33
Trend: Microprocessor Performance
34
Memory Dynamic Random Access Memory (DRAM) The choice for main memory Volatile (contents go away when power is lost) Fast Relatively small DRAM capacity: 2x / 2 years (since ‘96); 64x size improvement in last decade Static Random Access Memory (SRAM) The choice for cache Much faster than DRAM, but less dense and more costly Magnetic disks The choice for secondary memory Non-volatile Slower Relatively large Capacity: 2x / 1 year (since ‘97) 250X size in last decade Solid state (Flash) memory The choice for embedded computers
35
Memory Optical disks Removable, therefore very large
Slower than magnetic disks Magnetic tape Even slower Sequential (non-random) access The choice for archival
36
DRAM Capacity Growth 16K 64K 256K 1M 4M 16M 64M 128M 256M 512M
Memories quadrupled in capacity every 3 years up until 1996, a 60% increase per year for 20 years. Capacity is now doubling every two years.
37
Trend: Memory Capacity
Growth of DRAM capacity per chip: 1986, 1 Mbit; 1989, 4 Mbit; …; now 512 Mbit. The rate is now 1.4X/yr, or 2X every 2 years: more than 10,000X since 1980!
38
Dramatic Technology Change
State-of-the-art PC when you graduate (at least…): Processor clock speed: 5000 MegaHertz (5.0 GigaHertz) Memory capacity: 4000 MegaBytes (4.0 GigaBytes) Disk capacity: 2000 GigaBytes (2.0 TeraBytes) New units! Mega => Giga, Giga => Tera (Kilo, Mega, Giga, Tera, Peta, Exa, Zetta, Yotta: each step is a factor of 1024) Come up with a clever mnemonic: fame!
39
Example Machine Organization
Workstation design target 25% of cost on processor 25% of cost on memory (minimum memory size) Rest on I/O devices, power supplies, box Any computer, no matter how primitive or advanced, can be divided into five parts: 1. The input devices bring the data from the outside world into the computer. 2. These data are kept in the computer’s memory until… 3. …the datapath requests and processes them. 4. The operation of the datapath is controlled by the computer’s controller. 5. All the work done by the computer does us no good unless we can get the data back to the outside world, which is the job of the output devices. The most common way to connect these five components is a network of busses.
40
Example Machine Organization
[Block diagram: TI SuperSPARC TMS390Z50 in a Sun SPARCstation 20. MBus module: SuperSPARC integer and floating-point units, instruction cache, data cache, store buffer, reference MMU, L2 cache controller (CC), DRAM controller, bus interface. System: L64852 MBus control, MBus-to-SBus adapter, SBus cards, SBus DMA, STDIO, serial, keyboard, mouse, audio, RTC, SCSI, Ethernet, Boot PROM, floppy.]
41
MIPS R3000 Instruction Set Architecture
Registers: R0–R31, PC, HI, LO. Instruction categories: Load/Store, Computational, Jump and Branch, Floating Point (coprocessor), Memory Management, Special. 3 instruction formats, all 32 bits wide: OP rs rt rd sa funct | OP rs rt immediate | OP jump target
42
Defining Performance Which airplane has the best performance?
43
Response Time and Throughput
Response time: how long it takes to do a task. Throughput: total work done per unit time, e.g., tasks/transactions/… per hour. How are response time and throughput affected by replacing the processor with a faster version? By adding more processors? We’ll focus on response time for now…
44
Relative Performance Define Performance = 1/Execution Time
“X is n times faster than Y” means PerformanceX / PerformanceY = Execution TimeY / Execution TimeX = n. Example: a program takes 10s on A and 15s on B. Execution TimeB / Execution TimeA = 15s / 10s = 1.5, so A is 1.5 times faster than B.
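The definition and example can be checked with a quick Python sketch (the 10 s and 15 s timings are the example's values):

```python
def performance(exec_time_s):
    # Performance is defined as the reciprocal of execution time.
    return 1.0 / exec_time_s

time_a, time_b = 10.0, 15.0  # seconds, from the example
n = performance(time_a) / performance(time_b)  # equals time_b / time_a
print(n)  # 1.5, so A is 1.5 times faster than B
```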
45
Measuring Execution Time
Elapsed time Total response time, including all aspects Processing, I/O, OS overhead, idle time Determines system performance CPU time Time spent processing a given job Discounts I/O time, other jobs’ shares Comprises user CPU time and system CPU time Different programs are affected differently by CPU and system performance
46
CPU Clocking Operation of digital hardware is governed by a constant-rate clock. Within each clock cycle, data transfer and computation take place, then the state is updated. Clock period: duration of a clock cycle, e.g., 250ps = 0.25ns = 250×10–12s. Clock frequency (rate): cycles per second, e.g., 4.0GHz = 4000MHz = 4.0×109Hz.
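Since period and frequency are reciprocals, the slide's two example numbers are consistent, as a two-line Python check shows:

```python
period_s = 250e-12        # clock period: 250 ps
freq_hz = 1.0 / period_s  # clock rate: cycles per second
print(freq_hz / 1e9)      # 4.0 (GHz), matching the slide's example
```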
47
CPU Time = CPU Clock Cycles × Clock Cycle Time = CPU Clock Cycles / Clock Rate. Performance is improved by reducing the number of clock cycles
or by increasing the clock rate; the hardware designer must often trade off clock rate against cycle count.
48
CPU Time Example Computer A: 2GHz clock, 10s CPU time
Designing Computer B Aim for 6s CPU time Can use a faster clock, but that causes 1.2× as many clock cycles How fast must Computer B’s clock be?
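Working the example through (a sketch in Python; all numbers come from the slide):

```python
# Computer A: 2 GHz clock, 10 s CPU time.
cycles_a = 10.0 * 2e9        # CPU cycles = time x clock rate = 20e9
# Computer B runs the program in 1.2x the cycles, targeting 6 s.
cycles_b = 1.2 * cycles_a    # 24e9 cycles
clock_b_hz = cycles_b / 6.0  # required clock rate
print(clock_b_hz / 1e9)      # 4.0: Computer B needs a 4 GHz clock
```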
49
Instruction Count and CPI
Instruction Count for a program: determined by the program, the ISA, and the compiler. Average cycles per instruction (CPI): determined by the CPU hardware. If different instructions have different CPI, the average CPI is affected by the instruction mix.
50
CPI Example Computer A: Cycle Time = 250ps, CPI = 2.0
Computer B: Cycle Time = 500ps, CPI = 1.2 Same ISA Which is faster, and by how much? CPU time per instruction: A takes 2.0 × 250ps = 500ps; B takes 1.2 × 500ps = 600ps. A is faster, by 600ps/500ps = 1.2×.
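With the same ISA the instruction count is identical, so comparing CPI × cycle time per instruction settles it (Python sketch, values from the slide):

```python
time_per_instr_a = 2.0 * 250e-12  # CPI_A x cycle time_A = 500 ps
time_per_instr_b = 1.2 * 500e-12  # CPI_B x cycle time_B = 600 ps
speedup = time_per_instr_b / time_per_instr_a
print(speedup)  # ~1.2: A is 1.2 times faster than B
```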
51
CPI in More Detail If different instruction classes take different numbers of cycles, Clock Cycles = Σi (CPIi × Instruction Counti), so the weighted average CPI = Clock Cycles / Instruction Count = Σi (CPIi × relative frequency of class i).
52
CPI Example Alternative compiled code sequences using instructions in classes A, B, C:

Class:             A  B  C
CPI for class:     1  2  3
IC in sequence 1:  2  1  2
IC in sequence 2:  4  1  1

Sequence 1: IC = 5, Clock Cycles = 2×1 + 1×2 + 2×3 = 10, Avg. CPI = 10/5 = 2.0. Sequence 2: IC = 6, Clock Cycles = 4×1 + 1×2 + 1×3 = 9, Avg. CPI = 9/6 = 1.5.
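The two averages can be reproduced with a short Python sketch (class CPIs and instruction counts from the example):

```python
cpi = {"A": 1, "B": 2, "C": 3}   # cycles per instruction, by class
seq1 = {"A": 2, "B": 1, "C": 2}  # instruction counts, sequence 1
seq2 = {"A": 4, "B": 1, "C": 1}  # instruction counts, sequence 2

def avg_cpi(counts):
    cycles = sum(cpi[c] * n for c, n in counts.items())
    return cycles / sum(counts.values())

print(avg_cpi(seq1))  # 2.0  (10 cycles / 5 instructions)
print(avg_cpi(seq2))  # 1.5  (9 cycles / 6 instructions)
```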
53
Performance Summary The BIG Picture Performance depends on
Algorithm: affects IC, possibly CPI Programming language: affects IC, CPI Compiler: affects IC, CPI Instruction set architecture: affects IC, CPI, Tc
54
Power Trends (§1.5 The Power Wall) In CMOS IC technology, Power = Capacitive load × Voltage² × Frequency.
[Chart: over the past decades, clock frequency grew about ×1000 while supply voltage dropped from 5V to 1V, so power grew only about ×30.]
55
Reducing Power Suppose a new CPU has
85% of the capacitive load of the old CPU, with a 15% voltage reduction and a 15% frequency reduction. Then Pnew/Pold = 0.85 × 0.85² × 0.85 = 0.85⁴ ≈ 0.52. The power wall: we can’t reduce voltage further, and we can’t remove more heat. How else can we improve performance?
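Since dynamic power in CMOS scales as P = C × V² × f, the example's reductions simply multiply together (Python check, values from the slide):

```python
# New CPU: 85% capacitive load, 85% voltage (15% cut), 85% frequency.
power_ratio = 0.85 * (0.85 ** 2) * 0.85  # = 0.85 ** 4
print(power_ratio)  # ~0.52: the new CPU dissipates about half the power
```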
56
Uniprocessor Performance
§1.6 The Sea Change: The Switch to Multiprocessors. Single-processor performance growth is now constrained by power, available instruction-level parallelism, and memory latency.
57
Multiprocessors Multicore microprocessors
More than one processor per chip Requires explicitly parallel programming Compare with instruction level parallelism Hardware executes multiple instructions at once Hidden from the programmer Hard to do Programming for performance Load balancing Optimizing communication and synchronization
58
SPEC CPU Benchmark Programs used to measure performance
Supposedly typical of actual workload Standard Performance Evaluation Corp (SPEC) Develops benchmarks for CPU, I/O, Web, … SPEC CPU2006 Elapsed time to execute a selection of programs Negligible I/O, so focuses on CPU performance Normalize relative to reference machine Summarize as geometric mean of performance ratios CINT2006 (integer) and CFP2006 (floating-point)
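SPEC's summary statistic is easy to sketch in Python; the two ratios below are made-up values purely to illustrate the geometric mean:

```python
import math

def geometric_mean(specratios):
    # nth root of the product of n performance ratios
    return math.prod(specratios) ** (1.0 / len(specratios))

print(geometric_mean([2.0, 8.0]))  # 4.0 (hypothetical ratios)
```

A geometric mean has the useful property that the ratio of two machines' summaries does not depend on which machine is chosen as the reference.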
59
CINT2006 for Opteron X4 2356

Name | Description | IC×10⁹ | CPI | Tc (ns) | Exec time | Ref time | SPECratio
perl | Interpreted string processing | 2,118 | 0.75 | 0.40 | 637 | 9,777 | 15.3
bzip2 | Block-sorting compression | 2,389 | 0.85 | — | 817 | 9,650 | 11.8
gcc | GNU C Compiler | 1,050 | 1.72 | 0.47 | 724 | 8,050 | 11.1
mcf | Combinatorial optimization | 336 | 10.00 | — | 1,345 | 9,120 | 6.8
go | Go game (AI) | 1,658 | 1.09 | — | 721 | 10,490 | 14.6
hmmer | Search gene sequence | 2,783 | 0.80 | — | 890 | 9,330 | 10.5
sjeng | Chess game (AI) | 2,176 | 0.96 | 0.48 | 837 | 12,100 | 14.5
libquantum | Quantum computer simulation | 1,623 | 1.61 | — | 1,047 | 20,720 | 19.8
h264avc | Video compression | 3,102 | — | — | 993 | 22,130 | 22.3
omnetpp | Discrete event simulation | 587 | 2.94 | — | 690 | 6,250 | 9.1
astar | Games/path finding | 1,082 | 1.79 | — | 773 | 7,020 | 9.1
xalancbmk | XML parsing | 1,058 | 2.70 | — | 1,143 | 6,900 | 6.0
Geometric mean: 11.7

Note: the benchmarks with large CPI (e.g., mcf at 10.00) suffer high cache miss rates.
60
SPEC Power Benchmark Power consumption of server at different workload levels Performance: ssj_ops/sec Power: Watts (Joules/sec)
61
SPECpower_ssj2008 for X4

Target Load % | Performance (ssj_ops/sec) | Average Power (Watts)
100% | 231,867 | 295
90% | 211,282 | 286
80% | 185,803 | 275
70% | 163,427 | 265
60% | 140,160 | 256
50% | 118,324 | 246
40% | 92,035 | 233
30% | 70,500 | 222
20% | 47,126 | 206
10% | 23,066 | 180
0% | 0 | 141
Overall sum | 1,283,590 | 2,605
∑ssj_ops / ∑power | 493
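The overall metric is just the column sums divided (Python sketch; per-level values from the table, reading the 40% row as 92,035 so the stated totals check out):

```python
# (performance in ssj_ops/sec, average power in watts) per target load level
levels = [(231867, 295), (211282, 286), (185803, 275), (163427, 265),
          (140160, 256), (118324, 246), (92035, 233), (70500, 222),
          (47126, 206), (23066, 180), (0, 141)]  # 100% load down to 0%
total_ops = sum(ops for ops, _ in levels)    # 1,283,590
total_watts = sum(w for _, w in levels)      # 2,605
print(round(total_ops / total_watts))        # 493 overall ssj_ops per watt
```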
62
Pitfall: Amdahl’s Law Improving an aspect of a computer and expecting a proportional improvement in overall performance (§1.8 Fallacies and Pitfalls). Timproved = Taffected / improvement factor + Tunaffected. Example: multiply accounts for 80s of a 100s total. How much improvement in multiply performance is needed to get 5× overall? 20 = 80/n + 20 requires 80/n = 0: can’t be done! Corollary: make the common case fast.
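Amdahl's Law makes the impossibility concrete: no matter how much the multiply is sped up, the untouched 20 s remains (Python sketch, values from the example):

```python
def improved_time(t_affected, t_unaffected, factor):
    # Time after speeding up only the affected portion by `factor`.
    return t_affected / factor + t_unaffected

# 5x overall would need 100 s / 5 = 20 s total, but:
for factor in (2, 10, 100, 1_000_000):
    print(factor, improved_time(80.0, 20.0, factor))
# The total approaches, but never reaches, the 20 s floor.
```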
63
Fallacy: Low Power at Idle
Look back at X4 power benchmark At 100% load: 295W At 50% load: 246W (83%) At 10% load: 180W (61%) Google data center Mostly operates at 10% – 50% load At 100% load less than 1% of the time Consider designing processors to make power proportional to load
64
Pitfall: MIPS as a Performance Metric
MIPS: Millions of Instructions Per Second Doesn’t account for Differences in ISAs between computers Differences in complexity between instructions CPI varies between programs on a given CPU
65
Concluding Remarks Cost/performance is improving
Due to underlying technology development Hierarchical layers of abstraction In both hardware and software Instruction set architecture The hardware/software interface Execution time: the best performance measure Power is a limiting factor Use parallelism to improve performance §1.9 Concluding Remarks