
1 CS61C L22 Pipelining II, Cache I (1) Chae, Summer 2008 © UCB Albert Chae, Instructor inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures Lecture #22 – Pipelining II, Cache I 2008-7-29 Wireworld circuits http://www.maa.org/editorial/mathgames/mathgames_05_24_04.html http://www.quinapalus.com/wi-index.html

2 CS61C L22 Pipelining II, Cache I (2) Chae, Summer 2008 © UCB Review: Processor Pipelining (1/2) “Pipeline registers” are added to the datapath/controller to neatly divide the single-cycle processor into “pipeline stages”. Optimal pipeline: each stage is executing part of an instruction each clock cycle, and one instruction finishes during each clock cycle, so on average instructions execute far more quickly. What makes this work well? Similarities between instructions allow us to use the same stages for all instructions (generally), and each stage takes about the same amount of time as all the others: little wasted time.

3 CS61C L22 Pipelining II, Cache I (3) Chae, Summer 2008 © UCB A pipelined datapath (figure from P&H).

4 CS61C L22 Pipelining II, Cache I (4) Chae, Summer 2008 © UCB Review: Pipeline (2/2) Pipelining is a BIG IDEA: a widely used concept. What makes it less than perfect? Structural hazards: conflicts for resources (suppose we had only one cache?) → need more HW resources. Control hazards: branch instructions affect which instructions come next → delayed branch. Data hazards: data flow between instructions → forwarding.

5 CS61C L22 Pipelining II, Cache I (5) Chae, Summer 2008 © UCB Review Some fixes to hazards: the illusion of two memories, the register file convention, forwarding, the load delay slot; if all else fails, bubble/stall. Latency vs. throughput. What prevents us from getting n-times speedup, where n is the number of pipeline stages?
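To make that review question concrete, here is a minimal C sketch (not from the lecture; the stage time, instruction count, and stall count are made-up figures) comparing single-cycle and pipelined execution time, and showing why hazards keep the speedup below the ideal factor of n:

```c
#include <stdio.h>

int main(void) {
    /* All numbers below are assumptions for illustration only. */
    const int    stages       = 5;        /* pipeline depth, n        */
    const double stage_ns     = 200.0;    /* time per pipeline stage  */
    const long   insts        = 1000000;  /* instructions executed    */
    const long   stall_cycles = 150000;   /* bubbles from hazards     */

    /* Single-cycle machine: each instruction takes all n stage times. */
    double single_cycle_ns = (double)insts * stages * stage_ns;

    /* Pipelined machine: one instruction finishes per cycle once the
       pipeline is full, plus one extra cycle per stall bubble. */
    double pipelined_ns =
        ((double)insts + stall_cycles + (stages - 1)) * stage_ns;

    printf("single-cycle: %.2f ms\n", single_cycle_ns / 1e6);
    printf("pipelined:    %.2f ms\n", pipelined_ns / 1e6);
    printf("speedup:      %.2fx (ideal: %dx)\n",
           single_cycle_ns / pipelined_ns, stages);
    return 0;
}
```

The stall cycles and the pipeline fill time are exactly what keep the measured speedup below n.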

6 CS61C L22 Pipelining II, Cache I (6) Chae, Summer 2008 © UCB Graphical Pipeline Representation (Pipeline diagram: instructions Load, Add, Store, Sub, Or listed in instruction order, each passing through the I$, Reg, ALU, D$, Reg stages one clock cycle behind the previous one; in the Reg stage, the right half is highlighted for reads, the left half for writes.)

7 CS61C L22 Pipelining II, Cache I (7) Chae, Summer 2008 © UCB Control Hazard: Branching (1/8) Where do we do the compare for the branch? (Pipeline diagram: beq followed by Instr 1–4 in instruction order, each one clock cycle behind, through the I$, Reg, ALU, D$, Reg stages.)

8 CS61C L22 Pipelining II, Cache I (8) Chae, Summer 2008 © UCB Control Hazard: Branching (2/8) We had put the branch decision-making hardware in the ALU stage; therefore, two more instructions after the branch will always be fetched, whether or not the branch is taken. Desired functionality of a branch: if we do not take the branch, don't waste any time and continue executing normally; if we take the branch, don't execute any instructions after the branch, just go to the desired label.

9 CS61C L22 Pipelining II, Cache I (9) Chae, Summer 2008 © UCB Control Hazard: Branching (3/8) Initial solution: stall until the decision is made: insert “no-op” instructions (those that accomplish nothing, just take time) or hold up the fetch of the next instruction (for 2 cycles). Drawback: branches take 3 clock cycles each (assuming the comparator is put in the ALU stage).

10 CS61C L22 Pipelining II, Cache I (10) Chae, Summer 2008 © UCB Control Hazard: Branching (4/8) Optimization #1: insert a special branch comparator in Stage 2; as soon as the instruction is decoded (the opcode identifies it as a branch), immediately make a decision and set the new value of the PC. Benefit: since the branch is complete in Stage 2, only one unnecessary instruction is fetched, so only one no-op is needed. Side note: this means that branches are idle in Stages 3, 4 and 5.

11 CS61C L22 Pipelining II, Cache I (11) Chae, Summer 2008 © UCB Control Hazard: Branching (5/8) Branch comparator moved to Decode stage. (Pipeline diagram: beq followed by Instr 1–4 through the I$, Reg, ALU, D$, Reg stages; the branch now resolves in the Reg/Decode stage.)

12 CS61C L22 Pipelining II, Cache I (12) Chae, Summer 2008 © UCB Control Hazard: Branching (6a/8) User inserting a no-op instruction. (Pipeline diagram: add, beq, nop, lw; the nop occupies the bubble slot after beq.) Impact: 2 clock cycles per branch instruction → slow.

13 CS61C L22 Pipelining II, Cache I (13) Chae, Summer 2008 © UCB Control Hazard: Branching (6b/8) Controller inserting a single bubble. (Pipeline diagram: add, beq, lw; the controller inserts a bubble after beq instead of requiring a user no-op.) Impact: 2 clock cycles per branch instruction → slow.

14 CS61C L22 Pipelining II, Cache I (14) Chae, Summer 2008 © UCB Control Hazard: Branching (7/8) Optimization #2: redefine branches. Old definition: if we take the branch, none of the instructions after the branch get executed by accident. New definition: whether or not we take the branch, the single instruction immediately following the branch gets executed (it sits in the branch-delay slot). The term “delayed branch” means we always execute the instruction after the branch. This optimization is used on MIPS.

15 CS61C L22 Pipelining II, Cache I (15) Chae, Summer 2008 © UCB Control Hazard: Branching (8/8) Notes on the branch-delay slot. Worst case: we can always put a no-op in the branch-delay slot. Better case: we can find an instruction preceding the branch which can be placed in the branch-delay slot without affecting the flow of the program. Re-ordering instructions is a common method of speeding up programs; the compiler must be very smart to find instructions to do this, and usually it can find such an instruction at least 50% of the time. Jumps also have a delay slot…

16 CS61C L22 Pipelining II, Cache I (16) Chae, Summer 2008 © UCB Example: Nondelayed vs. Delayed Branch

Nondelayed Branch:
      or   $8, $9, $10
      add  $1, $2, $3
      sub  $4, $5, $6
      beq  $1, $4, Exit
      xor  $10, $1, $11
Exit:

Delayed Branch:
      add  $1, $2, $3
      sub  $4, $5, $6
      beq  $1, $4, Exit
      or   $8, $9, $10    (independent of the branch, so it fills the delay slot)
      xor  $10, $1, $11
Exit:

17 CS61C L22 Pipelining II, Cache I (17) Chae, Summer 2008 © UCB Out-of-Order Laundry: Don't Wait A depends on D; the rest continue; we need more resources to allow out-of-order execution. (Laundry timeline diagram, 6 PM to 2 AM in 30-minute slots: tasks A–F; A waits as a bubble while later independent loads proceed.)

18 CS61C L22 Pipelining II, Cache I (18) Chae, Summer 2008 © UCB Superscalar Laundry: Parallel per stage More resources, HW to match the mix of parallel tasks? (Laundry timeline diagram, 6 PM to 2 AM in 30-minute slots: tasks A–F run two at a time, with light clothing, dark clothing, and very dirty clothing loads paired up in parallel.)

19 CS61C L22 Pipelining II, Cache I (19) Chae, Summer 2008 © UCB Superscalar Laundry: Mismatch Mix The task mix underutilizes the extra resources. (Laundry timeline diagram, 6 PM to 2 AM in 30-minute slots: tasks A–D, light and dark clothing only; without a second load of the right type, the extra hardware sits idle.)

20 CS61C L22 Pipelining II, Cache I (20) Chae, Summer 2008 © UCB Real-world pipelining problem You're the manager of a HUGE assembly plant that builds computers. The main pipeline has 60 stages at 10 minutes per pipeline stage, for a latency of 10 hours. Problem: each box needs to run a 2-hour test before it's done… help!

21 CS61C L22 Pipelining II, Cache I (21) Chae, Summer 2008 © UCB Real-world pipelining problem solution 1 You remember that “a pipeline's frequency is limited by its slowest stage”, so if the 2-hour test becomes a pipeline stage, every stage must stretch from 10 minutes to 2 hours: latency goes from 10 hours to 120 hours.

22 CS61C L22 Pipelining II, Cache I (22) Chae, Summer 2008 © UCB Real-world pipelining problem solution 2 Create a sub-pipeline! Keep the main pipeline at 10 minutes per pipeline stage (60 stages), and run the 2-hour test as its own sub-pipeline with 12 CPUs in it at once, so a tested box still emerges every 10 minutes.
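A small C sketch of the arithmetic behind the two solutions (the 60 stages, 10-minute beat, and 2-hour test come from the slides; the formulas are a straightforward reading of them):

```c
#include <stdio.h>

int main(void) {
    const double stage_min = 10.0;   /* main pipeline stage time */
    const double n_stages  = 60.0;   /* main pipeline depth      */
    const double test_min  = 120.0;  /* 2-hour test              */

    /* Solution 1: the test is one stage, and a pipeline's beat is set
       by its slowest stage, so all 60 stages stretch to 2 hours. */
    double slow_latency_hr = n_stages * test_min / 60.0;      /* 120 hr   */
    double slow_rate       = 60.0 / test_min;                 /* boxes/hr */

    /* Solution 2: split the test into its own 12-stage sub-pipeline
       (120 min / 10 min), keeping the 10-minute beat throughout. */
    double sub_stages     = test_min / stage_min;             /* 12 CPUs in test */
    double sub_latency_hr = (n_stages + sub_stages) * stage_min / 60.0; /* 12 hr */
    double sub_rate       = 60.0 / stage_min;                 /* 6 boxes/hr */

    printf("solution 1: latency %.0f hr, throughput %.1f boxes/hr\n",
           slow_latency_hr, slow_rate);
    printf("solution 2: latency %.0f hr, throughput %.1f boxes/hr\n",
           sub_latency_hr, sub_rate);
    return 0;
}
```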

23 CS61C L22 Pipelining II, Cache I (23) Chae, Summer 2008 © UCB Peer Instruction (1/2) Assume 1 instruction/clock, delayed branch, 5-stage pipeline, forwarding, interlock on unresolved load hazards (after 10^3 loops, so the pipeline is full).

Loop: lw    $t0, 0($s1)
      addu  $t0, $t0, $s2
      sw    $t0, 0($s1)
      addiu $s1, $s1, -4
      bne   $s1, $zero, Loop
      nop

How many pipeline stages (clock cycles) per loop iteration to execute this code? 1 2 3 4 5 6 7 8 9 10

24 CS61C L22 Pipelining II, Cache I (24) Chae, Summer 2008 © UCB Peer Instruction Answer (1/2) Assume 1 instruction/clock, delayed branch, 5-stage pipeline, forwarding, interlock on unresolved load hazards. 10^3 iterations, so the pipeline is full.

1. lw    $t0, 0($s1)
2. (data hazard, so stall)
3. addu  $t0, $t0, $s2
4. sw    $t0, 0($s1)
5. addiu $s1, $s1, -4
6. bne   $s1, $zero, Loop   (!= resolved in the decode stage)
7. nop                      (delayed branch, so we execute the nop)

Answer: 7 clock cycles per loop iteration.

25 CS61C L22 Pipelining II, Cache I (25) Chae, Summer 2008 © UCB Peer Instruction (2/2) Assume 1 instruction/clock, delayed branch, 5-stage pipeline, forwarding, interlock on unresolved load hazards (after 10^3 loops, so the pipeline is full). Rewrite this code to reduce the pipeline stages (clock cycles) per loop to as few as possible.

Loop: lw    $t0, 0($s1)
      addu  $t0, $t0, $s2
      sw    $t0, 0($s1)
      addiu $s1, $s1, -4
      bne   $s1, $zero, Loop
      nop

How many pipeline stages (clock cycles) per loop iteration to execute this code? 1 2 3 4 5 6 7 8 9 10

26 CS61C L22 Pipelining II, Cache I (26) Chae, Summer 2008 © UCB Peer Instruction (2/2) How long to execute? Rewrite this code to reduce the clock cycles per loop to as few as possible:

1. Loop: lw    $t0, 0($s1)
2.       addiu $s1, $s1, -4   (fills the load-delay cycle)
3.       addu  $t0, $t0, $s2  (no hazard, since there is an extra cycle)
4.       bne   $s1, $zero, Loop
5.       sw    $t0, +4($s1)   (modified sw to put it past the addiu, so the offset becomes +4)

How many pipeline stages (clock cycles) per loop iteration to execute your revised code (assume the pipeline is full)? Answer: 5.

27 CS61C L22 Pipelining II, Cache I (27) Chae, Summer 2008 © UCB Administrivia HW5 due TODAY 7/29. Quiz 9 due Wednesday 7/30. HW6 due Friday 8/1. Proj3 out soon, due next Tuesday 8/5; it will be hand graded in person, and signups will be posted soon. Midterm regrades due TODAY 7/29. Proj1 grades out, proj2 hopefully soon; appeals due 7/31.

28 CS61C L22 Pipelining II, Cache I (28) Chae, Summer 2008 © UCB Administrivia The lab on polling/interrupts is cancelled; we will give everyone 4 pts on that lab. The drop or grading option deadline is August 1; see summer.berkeley.edu for more details.

29 CS61C L22 Pipelining II, Cache I (29) Chae, Summer 2008 © UCB The Big Picture (Block diagram of a computer: the Processor (active), made up of Control (the “brain”) and the Datapath (the “brawn”); Memory (passive), where programs and data live when running; and Devices for Input (keyboard, mouse) and Output (display, printer), plus disk and network.)

30 CS61C L22 Pipelining II, Cache I (30) Chae, Summer 2008 © UCB Memory Hierarchy Storage in computer systems: The processor holds data in the register file (~100 bytes); registers are accessed on a nanosecond timescale. Memory (we'll call it “main memory”) has more capacity than registers (~GBytes) but an access time of ~50-100 ns: hundreds of clock cycles per memory access?! Disk has HUGE capacity (virtually limitless) but is VERY slow: ~milliseconds.
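The “hundreds of clock cycles” claim is just unit arithmetic; a quick C check (the 2 GHz clock rate is an assumed figure; the 50-100 ns range is from the slide):

```c
#include <stdio.h>

int main(void) {
    const double clock_ghz = 2.0;              /* assumed: 2 cycles per ns */
    const double dram_ns[] = { 50.0, 100.0 };  /* main-memory latency range */

    for (int i = 0; i < 2; i++) {
        /* ns * (cycles/ns) = cycles */
        printf("%5.0f ns access at %.0f GHz = %3.0f clock cycles\n",
               dram_ns[i], clock_ghz, dram_ns[i] * clock_ghz);
    }
    return 0;
}
```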

31 CS61C L22 Pipelining II, Cache I (31) Chae, Summer 2008 © UCB Motivation: Why We Use Caches (written $) (Graph, 1980-2000, log-scale performance: CPU performance grows ~60%/yr while DRAM grows ~7%/yr, so the processor-memory performance gap grows ~50%/yr.) 1989: first Intel CPU with a cache on chip. 1998: Pentium III has two levels of cache on chip.

32 CS61C L22 Pipelining II, Cache I (32) Chae, Summer 2008 © UCB Memory Caching Mismatch between processor and memory speeds leads us to add a new level: a memory cache Implemented with same IC processing technology as the CPU (usually integrated on same chip): faster but more expensive than DRAM memory. Cache is a copy of a subset of main memory. Most processors have separate caches for instructions and data.

33 CS61C L22 Pipelining II, Cache I (33) Chae, Summer 2008 © UCB Memory Hierarchy (Pyramid diagram: the processor sits above Level 1, Level 2, Level 3, … Level n; the size of memory at each level grows, and speed decreases, with increasing distance from the processor.) As we move to deeper levels, the latency goes up and the price per bit goes down.

34 CS61C L22 Pipelining II, Cache I (34) Chae, Summer 2008 © UCB Memory Hierarchy If a level is closer to the Processor, it is smaller, faster, and a subset of the lower levels (it contains the most recently used data). The lowest level (usually disk) contains all available data (or does it go beyond the disk?). The memory hierarchy presents the processor with the illusion of a very large, very fast memory.

35 CS61C L22 Pipelining II, Cache I (35) Chae, Summer 2008 © UCB Memory Hierarchy Analogy: Library (1/2) You're writing a term paper (Processor) at a table in Doe. Doe Library is equivalent to disk: essentially limitless capacity, but very slow to retrieve a book. The table is main memory: smaller capacity (you must return a book when the table fills up), but easier and faster to find a book there once you've already retrieved it.

36 CS61C L22 Pipelining II, Cache I (36) Chae, Summer 2008 © UCB Memory Hierarchy Analogy: Library (2/2) Open books on the table are the cache: smaller capacity (very few open books fit on the table, and again, when the table fills up, you must close a book), but much, much faster to retrieve data. Illusion created: the whole library is open on the tabletop. Keep as many recently used books open on the table as possible, since you're likely to use them again; also keep as many books on the table as possible, since that's faster than going to the library.

37 CS61C L22 Pipelining II, Cache I (37) Chae, Summer 2008 © UCB Memory Hierarchy Basis Cache contains copies of data in memory that are being used. Memory contains copies of data on disk that are being used. Caches work on the principles of temporal and spatial locality. Temporal Locality: if we use it now, chances are we’ll want to use it again soon. Spatial Locality: if we use a piece of memory, chances are we’ll use the neighboring pieces soon.
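Spatial locality is easy to see in ordinary code. A minimal C sketch (an illustration, not from the slides): the first loop nest walks consecutive addresses and uses every byte of each cache block it fetches; the second strides across rows and wastes most of each block.

```c
#include <stdio.h>

#define N 1024

/* C stores 2-D arrays row-major: grid[i][0..N-1] are adjacent in memory. */
static int grid[N][N];

int main(void) {
    long sum = 0;

    /* Good spatial locality: consecutive addresses, whole blocks used. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];

    /* Poor spatial locality: each access jumps N*sizeof(int) bytes,
       touching only one word of every cache block brought in. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += grid[i][j];

    printf("%ld\n", sum);  /* keep the compiler from deleting the loops */
    return 0;
}
```

Both loop nests compute the same sum; only the traversal order, and hence the cache behavior, differs.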

38 CS61C L22 Pipelining II, Cache I (38) Chae, Summer 2008 © UCB Cache Design How do we organize cache? Where does each memory address map to? (Remember that cache is subset of memory, so multiple memory addresses map to the same cache location.) How do we know which elements are in cache? How do we quickly locate them?

39 CS61C L22 Pipelining II, Cache I (39) Chae, Summer 2008 © UCB Direct-Mapped Cache (1/4) In a direct-mapped cache, each memory address is associated with one possible block within the cache Therefore, we only need to look in a single location in the cache for the data if it exists in the cache Block is the unit of transfer between cache and memory

40 CS61C L22 Pipelining II, Cache I (40) Chae, Summer 2008 © UCB Direct-Mapped Cache (2/4) (Diagram: 16 memory locations, addresses 0-F, mapping into a 4-byte direct-mapped cache with indices 0-3; block size = 1 byte.) Cache location 0 can be occupied by data from memory location 0, 4, 8, …: with 4 blocks, any memory location that is a multiple of 4. What if we wanted a block to be bigger than one byte?
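With one-byte blocks, the mapping on this slide is simply address mod (number of blocks); a tiny C sketch reproducing the picture:

```c
#include <stdio.h>

int main(void) {
    const unsigned num_blocks = 4;  /* 4-byte cache, 1-byte blocks */

    /* Addresses 0, 4, 8, C all map to index 0, as on the slide. */
    for (unsigned addr = 0x0; addr <= 0xF; addr++)
        printf("memory address 0x%X -> cache index %u\n",
               addr, addr % num_blocks);
    return 0;
}
```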

41 CS61C L22 Pipelining II, Cache I (41) Chae, Summer 2008 © UCB Direct-Mapped Cache (3/4) (Diagram: memory addresses 0-1E, drawn two bytes wide, mapping into an 8-byte direct-mapped cache with indices 0-3; block size = 2 bytes.) When we ask for a byte, the system finds the right block and loads it all! How does it know the right block, and how do we select the byte? E.g., memory address 11101? How does it know WHICH colored block it originated from? What do you do at baggage claim?

42 CS61C L22 Pipelining II, Cache I (42) Chae, Summer 2008 © UCB Direct-Mapped Cache (4/4) (Diagram: the same 8-byte direct-mapped cache with block size = 2 bytes, now with a tag stored next to the data at each of the 4 cache indices, and memory drawn with the same width as the block size.) What should go in the tag? Do we need the entire address? What do all these tags have in common? Recall what we did with the immediate in branch addressing: must we always count by bytes? Why not count by cache block number? It's useful to draw memory with the same width as the block size.

43 CS61C L22 Pipelining II, Cache I (43) Chae, Summer 2008 © UCB Issues with Direct-Mapped Since multiple memory addresses map to the same cache index, how do we tell which one is in there? And what if we have a block size > 1 byte? Answer: divide the memory address into three fields, ttttttttttttttttt iiiiiiiiii oooo: the tag (to check if we have the correct block), the index (to select the block), and the offset (the byte within the block).

44 CS61C L22 Pipelining II, Cache I (44) Chae, Summer 2008 © UCB Direct-Mapped Cache Terminology All fields are read as unsigned integers. Index: specifies the cache index (which “row”/block of the cache we should look in). Offset: once we've found the correct block, specifies which byte within the block we want. Tag: the remaining bits after the offset and index are determined; these are used to distinguish between all the memory addresses that map to the same location.
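The three fields fall out of the address with shifts and masks. A minimal C sketch (the helper name and the example widths are ours, not the slide's):

```c
#include <stdio.h>

/* Split an address into tag/index/offset given the index and offset
   widths in bits; the tag is whatever is left over at the top. */
static void tio_split(unsigned addr, int index_bits, int offset_bits) {
    unsigned offset = addr & ((1u << offset_bits) - 1);
    unsigned index  = (addr >> offset_bits) & ((1u << index_bits) - 1);
    unsigned tag    = addr >> (offset_bits + index_bits);
    printf("addr 0x%08X -> tag 0x%05X, index %u, offset %u\n",
           addr, tag, index, offset);
}

int main(void) {
    /* Example: 10 index bits and 4 offset bits, the widths derived in
       the 16 KB / 4-word-block example on the following slides. */
    tio_split(0x12345678, 10, 4);
    return 0;
}
```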

45 CS61C L22 Pipelining II, Cache I (45) Chae, Summer 2008 © UCB TIO Dan's great cache mnemonic: Tag, Index, Offset. AREA (cache size, B) = HEIGHT (# of blocks) * WIDTH (size of one block, B/block), i.e. 2^(H+W) = 2^H * 2^W.

46 CS61C L22 Pipelining II, Cache I (46) Chae, Summer 2008 © UCB Direct-Mapped Cache Example (1/3) Suppose we have 16 KB of data in a direct-mapped cache with 4-word blocks. Determine the size of the tag, index and offset fields if we're using a 32-bit architecture. Offset: we need to specify the correct byte within a block; a block contains 4 words = 16 bytes = 2^4 bytes, so we need 4 bits to specify the correct byte.

47 CS61C L22 Pipelining II, Cache I (47) Chae, Summer 2008 © UCB Direct-Mapped Cache Example (2/3) Index: (~index into an “array of blocks”) we need to specify the correct block in the cache. The cache contains 16 KB = 2^14 bytes, and a block contains 2^4 bytes (4 words), so # blocks/cache = (bytes/cache) / (bytes/block) = 2^14 bytes/cache / 2^4 bytes/block = 2^10 blocks/cache: we need 10 bits to specify this many blocks.

48 CS61C L22 Pipelining II, Cache I (48) Chae, Summer 2008 © UCB Direct-Mapped Cache Example (3/3) Tag: use the remaining bits as the tag. Tag length = address length - offset - index = 32 - 4 - 10 = 18 bits, so the tag is the leftmost 18 bits of the memory address. Why not use the full 32-bit address as the tag? All bytes within a block share the same address apart from the offset (4 bits), and the index must be the same for every address within a block, so it's redundant in the tag check and can be left off to save memory (here 10 bits).
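The same field-sizing arithmetic in a few lines of C (the parameters are the ones from this example; the simple log2 works because all the sizes are powers of two):

```c
#include <stdio.h>

/* log2 of an exact power of two, by shifting it down to 1 */
static int log2u(unsigned x) {
    int n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void) {
    const unsigned addr_bits   = 32;
    const unsigned cache_bytes = 16 * 1024;  /* 16 KB of data      */
    const unsigned block_bytes = 4 * 4;      /* 4 words = 16 bytes */

    int offset_bits = log2u(block_bytes);                   /*  4 */
    int index_bits  = log2u(cache_bytes / block_bytes);     /* 10 */
    int tag_bits    = addr_bits - offset_bits - index_bits; /* 18 */

    printf("offset %d bits, index %d bits, tag %d bits\n",
           offset_bits, index_bits, tag_bits);
    return 0;
}
```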

49 CS61C L22 Pipelining II, Cache I (49) Chae, Summer 2008 © UCB Caching Terminology When we try to read memory, 3 things can happen: 1. cache hit: the cache block is valid and contains the proper address, so read the desired word; 2. cache miss: nothing in the cache at the appropriate block, so fetch from memory; 3. cache miss, block replacement: wrong data is in the cache at the appropriate block, so discard it and fetch the desired data from memory (the cache always holds a copy).
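All three outcomes show up in a toy direct-mapped cache simulator; a minimal C sketch (the cache geometry and the address trace are made up for illustration):

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_BLOCKS  4   /* 2 index bits  */
#define BLOCK_BYTES 4   /* 2 offset bits */

struct line { bool valid; unsigned tag; };
static struct line cache[NUM_BLOCKS];

static void cache_access(unsigned addr) {
    unsigned block = addr / BLOCK_BYTES;   /* drop the byte offset */
    unsigned index = block % NUM_BLOCKS;
    unsigned tag   = block / NUM_BLOCKS;

    if (cache[index].valid && cache[index].tag == tag)
        printf("0x%02X: hit  (index %u)\n", addr, index);
    else if (!cache[index].valid)
        printf("0x%02X: miss (index %u was empty)\n", addr, index);
    else
        printf("0x%02X: miss (index %u, replacing tag %u)\n",
               addr, index, cache[index].tag);

    cache[index].valid = true;  /* block is fetched from memory */
    cache[index].tag   = tag;
}

int main(void) {
    /* hit, compulsory miss, and block replacement all appear here */
    unsigned trace[] = { 0x00, 0x01, 0x10, 0x00, 0x04 };
    for (int i = 0; i < 5; i++)
        cache_access(trace[i]);
    return 0;
}
```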

50 CS61C L22 Pipelining II, Cache I (50) Chae, Summer 2008 © UCB Peer instruction Consider an address split into fields for cache access as follows: tttttttttttttttttttttttiiiiiioooo. How big are the cache blocks in words? 4 8 16 32. How many entries does the cache have? 4 8 16 32. How big is a cache entry? 24 33 55 56.

51 CS61C L22 Pipelining II, Cache I (51) Chae, Summer 2008 © UCB In Conclusion The pipeline challenge is hazards: forwarding helps with many data hazards; delayed branch helps with the control hazard in a 5-stage pipeline; a load delay slot / interlock is necessary. More aggressive performance: superscalar, out-of-order execution. Use caches to simulate fast large memory.

