Computer Architecture: A Quantitative Approach, Fifth Edition
Chapter 2: Memory Hierarchy Design
The University of Adelaide, School of Computer Science
Copyright © 2012, Elsevier Inc. All rights reserved.
Introduction
- Programmers want unlimited amounts of memory with low latency
- Fast memory technology is more expensive per bit than slower memory
- Solution: organize the memory system into a hierarchy
  - The entire addressable memory space is available in the largest, slowest level
  - Incrementally smaller and faster memories, each containing a subset of the memory in the level below, sit progressively closer to the processor
- Temporal and spatial locality ensure that nearly all references can be found in the smaller memories
  - Gives the illusion of a large, fast memory being presented to the processor
Memory Hierarchy (figure)
Memory Performance Gap (figure)
Memory Hierarchy Design
- Memory hierarchy design becomes more crucial with recent multi-core processors:
  - Aggregate peak bandwidth grows with the number of cores
  - An Intel Core i7 can generate two references per core per clock
  - With four cores and a 3.2 GHz clock: 25.6 billion 64-bit data references/second + 12.8 billion 128-bit instruction references/second = 409.6 GB/s!
  - DRAM bandwidth is only 6% of this (25 GB/s)
- Requires:
  - Multi-port, pipelined caches
  - Two levels of cache per core
  - A shared third-level cache on chip
Increasing Memory Bandwidth
- 4-word wide memory (assuming 1 bus cycle to send the address, 15 bus cycles per DRAM access, 1 bus cycle per data transfer)
  - Miss penalty = 1 + 15 + 1 = 17 bus cycles
  - Bandwidth = 16 bytes / 17 cycles ≈ 0.94 bytes/cycle
- 4-bank interleaved memory
  - Miss penalty = 1 + 15 + 4×1 = 20 bus cycles
  - Bandwidth = 16 bytes / 20 cycles = 0.8 bytes/cycle
Performance and Power
- High-end microprocessors have more than 10 MB of on-chip cache
- This consumes a large share of the area and power budget
Memory Hierarchy Basics
- When a word is not found in the cache, a miss occurs:
  - Fetch the word from a lower level in the hierarchy, requiring a higher-latency access
  - The lower level may be another cache or main memory
  - Also fetch the other words contained in the block, taking advantage of spatial locality
- Place the block in any location within its set; the set is determined by the address:
  - set = (block address) MOD (number of sets)
Memory Hierarchy Basics
- n blocks per set => n-way set associative
  - Direct-mapped cache => one block per set
  - Fully associative => one set
- Writing to the cache: two strategies
  - Write-through: immediately update the lower levels of the hierarchy
  - Write-back: update the lower levels only when an updated block is replaced
  - Both strategies use a write buffer to make writes asynchronous
Direct Mapped Cache
- Location determined by address
- Direct mapped: only one choice
  - (Block address) modulo (#Blocks in cache)
  - #Blocks is a power of 2
  - Use low-order address bits
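As a sketch of this mapping, the following C fragment splits a byte address into offset, index, and tag; the block size and block count here are assumed values for illustration, not from the slides.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 64u   /* bytes per block (assumed)                   */
#define NUM_BLOCKS 256u  /* blocks in the cache, a power of 2 (assumed) */

int main(void) {
    uint32_t addr = 0x12345678;

    uint32_t block_addr = addr / BLOCK_SIZE;        /* strip the byte offset  */
    uint32_t index      = block_addr % NUM_BLOCKS;  /* low-order address bits */
    uint32_t tag        = block_addr / NUM_BLOCKS;  /* remaining high bits    */
    uint32_t offset     = addr % BLOCK_SIZE;        /* byte within the block  */

    printf("tag=0x%x index=%u offset=%u\n",
           (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}
```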
Tags and Valid Bits
- How do we know which particular block is stored in a cache location?
  - Store the block address as well as the data
  - Actually, only the high-order bits are needed: the tag
- What if there is no data in a location?
  - Valid bit: 1 = present, 0 = not present
  - Initially 0
Cache Example
- 8 blocks, 1 word/block, direct mapped
- Initial state: all entries invalid

  Index  V  Tag  Data
  000    N
  001    N
  010    N
  011    N
  100    N
  101    N
  110    N
  111    N

Access: word address 22 (binary 10 110): miss, placed in block 110

  Index  V  Tag  Data
  110    Y  10   Mem[10110]
  (all other entries still N)

Access: word address 26 (binary 11 010): miss, placed in block 010

  Index  V  Tag  Data
  010    Y  11   Mem[11010]
  110    Y  10   Mem[10110]
  (all other entries still N)

Accesses: word address 22 (10 110): hit in block 110; word address 26 (11 010): hit in block 010 (cache state unchanged)

Accesses: word address 16 (10 000): miss, block 000; word address 3 (00 011): miss, block 011; word address 16 (10 000): hit

  Index  V  Tag  Data
  000    Y  10   Mem[10000]
  010    Y  11   Mem[11010]
  011    Y  00   Mem[00011]
  110    Y  10   Mem[10110]
  (all other entries still N)

Access: word address 18 (10 010): miss, block 010; the block holding Mem[11010] is replaced

  Index  V  Tag  Data
  000    Y  10   Mem[10000]
  010    Y  10   Mem[10010]
  011    Y  00   Mem[00011]
  110    Y  10   Mem[10110]
  (all other entries still N)
Address Subdivision (figure)

Associative Cache Example (figure)
Spectrum of Associativity
- For a cache with 8 entries (figure)
Associativity Example
- Compare 4-block caches: direct mapped, 2-way set associative, fully associative
- Block access sequence: 0, 8, 0, 6, 8

Direct mapped:

  Block addr  Index  Hit/miss  Cache content after access
  0           0      miss      Mem[0] (index 0)
  8           0      miss      Mem[8] (index 0)
  0           0      miss      Mem[0] (index 0)
  6           2      miss      Mem[0] (index 0), Mem[6] (index 2)
  8           0      miss      Mem[8] (index 0), Mem[6] (index 2)
Associativity Example
2-way set associative:

  Block addr  Set  Hit/miss  Set 0 contents    Set 1 contents
  0           0    miss      Mem[0]
  8           0    miss      Mem[0], Mem[8]
  0           0    hit       Mem[0], Mem[8]
  6           0    miss      Mem[0], Mem[6]
  8           0    miss      Mem[8], Mem[6]

Fully associative:

  Block addr  Hit/miss  Cache content after access
  0           miss      Mem[0]
  8           miss      Mem[0], Mem[8]
  0           hit       Mem[0], Mem[8]
  6           miss      Mem[0], Mem[8], Mem[6]
  8           hit       Mem[0], Mem[8], Mem[6]
Set Associative Cache Organization (figure)

Replacement Policy
- Direct mapped: no choice
- Set associative:
  - Prefer a non-valid entry, if there is one
  - Otherwise, choose among the entries in the set
- Least-recently used (LRU)
  - Choose the entry unused for the longest time
  - Simple for 2-way, manageable for 4-way, too hard beyond that
- Random
  - Gives approximately the same performance as LRU for high associativity
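A minimal sketch of LRU for a 2-way set, where a single bit per set tracks the least-recently-used way; the structure is illustrative, not a hardware description.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical 2-way set: two ways plus one LRU bit. */
struct set2 {
    bool     valid[2];
    uint32_t tag[2];
    int      lru;          /* index of the least-recently-used way */
};

/* Returns the way that now holds 'tag' (hit or freshly allocated). */
static int access_set(struct set2 *s, uint32_t tag) {
    for (int w = 0; w < 2; w++) {
        if (s->valid[w] && s->tag[w] == tag) {  /* hit */
            s->lru = 1 - w;                     /* the other way is now LRU */
            return w;
        }
    }
    /* Miss: prefer a non-valid way, otherwise evict the LRU way. */
    int victim = !s->valid[0] ? 0 : (!s->valid[1] ? 1 : s->lru);
    s->valid[victim] = true;
    s->tag[victim]   = tag;
    s->lru           = 1 - victim;
    return victim;
}
```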
Finding a Block

  Associativity          Location method                                 Tag comparisons
  Direct mapped          Index                                           1
  n-way set associative  Set index, then search entries within the set   n
  Fully associative      Search all entries                              #entries
                         Full lookup table                               0

- Hardware caches: reduce the number of comparisons to reduce cost
- Virtual memory: a full table lookup makes full associativity feasible; the benefit is a reduced miss rate

Write Policy
- Write-through
  - Update both the upper and lower levels
  - Simplifies replacement, but may require a write buffer
- Write-back
  - Update the upper level only
  - Update the lower level when the block is replaced
  - Need to keep more state
- Virtual memory
  - Only write-back is feasible, given disk write latency
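The two policies can be sketched as follows; the block structure and the next_level_write() stub are hypothetical stand-ins for the lower level of the hierarchy.

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_BYTES 64

struct block { bool valid, dirty; uint32_t tag; uint8_t data[BLOCK_BYTES]; };

/* Stub standing in for the next level (L2 or memory). */
static void next_level_write(uint32_t addr, const uint8_t *data, unsigned len) {
    (void)addr; (void)data; (void)len;
}

/* Write-through: update the block and the lower level on every store
   (typically via a write buffer so the CPU does not wait). */
static void store_write_through(struct block *b, uint32_t addr, uint8_t byte) {
    b->data[addr % BLOCK_BYTES] = byte;
    next_level_write(addr, &byte, 1);
}

/* Write-back: update the block only and mark it dirty; the lower level
   sees the new data when the block is eventually replaced. */
static void store_write_back(struct block *b, uint32_t addr, uint8_t byte) {
    b->data[addr % BLOCK_BYTES] = byte;
    b->dirty = true;
}

static void evict(struct block *b, uint32_t block_addr) {
    if (b->valid && b->dirty)
        next_level_write(block_addr * BLOCK_BYTES, b->data, BLOCK_BYTES);
    b->valid = false;
    b->dirty = false;
}
```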
Cache Design Trade-offs

  Design change           Effect on miss rate           Negative performance effect
  Increase cache size     Decreases capacity misses     May increase access time
  Increase associativity  Decreases conflict misses     May increase access time
  Increase block size     Decreases compulsory misses   Increases miss penalty; for very large block sizes, may increase miss rate due to pollution

Cache Control
- Example cache characteristics:
  - Direct-mapped, write-back, write allocate
  - Block size: 4 words (16 bytes)
  - Cache size: 16 KB (1024 blocks)
  - 32-bit byte addresses
  - Valid bit and dirty bit per block
  - Blocking cache: the CPU waits until the access is complete
- Address breakdown: Tag = bits 31–14 (18 bits), Index = bits 13–4 (10 bits), Offset = bits 3–0 (4 bits)
Interface Signals

  Signal      CPU <-> Cache   Cache <-> Memory
  Read/Write
  Valid
  Address     32 bits         32 bits
  Write Data  32 bits         128 bits
  Read Data   32 bits         128 bits
  Ready

- Multiple cycles per access
Finite State Machines
- Use an FSM to sequence the control steps
- A set of states, with a transition on each clock edge
  - State values are binary encoded
  - The current state is stored in a register
  - Next state = fn(current state, current inputs)
- Control output signals = fo(current state)
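A skeleton of such a controller in C, assuming the four states of the simple cache FSM sketched in the following slide (Idle, Compare Tag, Write-Back, Allocate); the input and output signals are simplified stand-ins, not the exact control wires.

```c
#include <stdbool.h>

enum state { IDLE, COMPARE_TAG, WRITE_BACK, ALLOCATE };

struct inputs  { bool cpu_request, hit, dirty, mem_ready; };
struct outputs { bool cache_ready, mem_read, mem_write; };

/* One clock edge: outputs depend on the current state,
   next state depends on the current state and the inputs. */
enum state step(enum state s, struct inputs in, struct outputs *out) {
    out->cache_ready = out->mem_read = out->mem_write = false;
    switch (s) {
    case IDLE:
        out->cache_ready = true;
        return in.cpu_request ? COMPARE_TAG : IDLE;
    case COMPARE_TAG:
        if (in.hit)       return IDLE;                 /* hit: access done        */
        return in.dirty ? WRITE_BACK : ALLOCATE;       /* miss: spill, then fetch */
    case WRITE_BACK:
        out->mem_write = true;
        return in.mem_ready ? ALLOCATE : WRITE_BACK;
    case ALLOCATE:
        out->mem_read = true;
        return in.mem_ready ? COMPARE_TAG : ALLOCATE;
    }
    return IDLE;
}
```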
Cache Controller FSM (figure)
- Could partition into separate states to reduce the clock cycle time
Cache Coherence Problem
- Suppose two CPU cores share a physical address space, with write-through caches

  Time step  Event                 Cache A  Cache B  Memory (X)
  0                                                  0
  1          CPU A reads X         0                 0
  2          CPU B reads X         0        0        0
  3          CPU A writes 1 to X   1        0        1

- After step 3, CPU B's cache still holds the old value of X: the caches are incoherent
Coherence Defined
- Informally: reads return the most recently written value
- Formally:
  - P writes X, then P reads X (with no intervening writes): the read returns the written value
  - P1 writes X, then P2 reads X (sufficiently later): the read returns the written value
    - cf. CPU B reading X after step 3 in the example
  - P1 writes X and P2 writes X: all processors see the writes in the same order
    - All end up with the same final value for X
Cache Coherence Protocols
- Operations performed by caches in multiprocessors to ensure coherence:
  - Migration of data to local caches: reduces bandwidth demand on shared memory
  - Replication of read-shared data: reduces contention for access
- Snooping protocols: each cache monitors bus reads/writes
- Directory-based protocols: caches and memory record the sharing status of blocks in a directory
Invalidating Snooping Protocols
- A cache gets exclusive access to a block when it is to be written
  - It broadcasts an invalidate message on the bus
  - A subsequent read in another cache misses, and the owning cache supplies the updated value

  CPU activity         Bus activity        Cache A  Cache B  Memory (X)
  CPU A reads X        Cache miss for X    0                 0
  CPU B reads X        Cache miss for X    0        0        0
  CPU A writes 1 to X  Invalidate for X    1                 0
  CPU B reads X        Cache miss for X    1        1        1
Memory Consistency
- When are writes seen by other processors?
  - "Seen" means a read returns the written value
  - It cannot happen instantaneously
- Assumptions:
  - A write completes only when all processors have seen it
  - A processor does not reorder writes with respect to other accesses
- Consequence:
  - If P writes X and then writes Y, all processors that see the new Y also see the new X
  - Processors can reorder reads, but not writes
Multilevel On-Chip Caches (figure: ARM Cortex-A8 and Intel Core i7 memory hierarchies)

2-Level TLB Organization (figure)
Supporting Multiple Issue
- Both (the Cortex-A8 and the Core i7) have multi-banked caches that allow multiple accesses per cycle, assuming no bank conflicts
- Core i7 cache optimizations:
  - Return the requested word first
  - Non-blocking cache
    - Hit under miss
    - Miss under miss
  - Data prefetching
Memory Hierarchy Basics
- Miss rate: the fraction of cache accesses that result in a miss
- Causes of misses:
  - Compulsory: first reference to a block
  - Capacity: blocks discarded because the cache is too small, then later retrieved
  - Conflict: the program makes repeated references to multiple addresses from different blocks that map to the same location in the cache
Memory Hierarchy Basics
- Note that speculative and multithreaded processors may execute other instructions during a miss
  - This reduces the performance impact of misses
Memory Hierarchy Basics
- Six basic cache optimizations:
  - Larger block size
    - Reduces compulsory misses
    - Increases capacity and conflict misses, and increases miss penalty
  - Larger total cache capacity, to reduce miss rate
    - Increases hit time and power consumption
  - Higher associativity
    - Reduces conflict misses
    - Increases hit time and power consumption
  - More levels of cache
    - Reduces overall memory access time
  - Giving priority to read misses over writes
    - Reduces miss penalty
  - Avoiding address translation during cache indexing
    - Reduces hit time
Ten Advanced Optimizations
- Small and simple first-level caches
  - Critical timing path: addressing the tag memory, then comparing tags, then selecting the correct set
  - Direct-mapped caches can overlap the tag compare with transmission of the data
  - Lower associativity reduces power because fewer cache lines are accessed
L1 Size and Associativity (figure: access time vs. size and associativity)

L1 Size and Associativity (figure: energy per read vs. size and associativity)
Way Prediction
- To improve hit time, predict the way in order to pre-set the mux
  - A mis-prediction gives a longer hit time
  - Prediction accuracy: > 90% for two-way, > 80% for four-way; I-cache accuracy is better than D-cache
  - First used on the MIPS R10000 in the mid-90s; used on the ARM Cortex-A8
- Extend to predict the block as well ("way selection")
  - Increases the mis-prediction penalty
Pipelining Cache
- Pipeline cache access to improve bandwidth
- Examples:
  - Pentium: 1 cycle
  - Pentium Pro through Pentium III: 2 cycles
  - Pentium 4 through Core i7: 4 cycles
- Increases the branch mis-prediction penalty
- Makes it easier to increase associativity
Nonblocking Caches
- Allow hits before previous misses complete
  - "Hit under miss"
  - "Hit under multiple miss"
- L2 must support this
- In general, processors can hide an L1 miss penalty but not an L2 miss penalty
Multibanked Caches
- Organize the cache as independent banks to support simultaneous accesses
  - The ARM Cortex-A8 supports 1–4 banks for L2
  - The Intel Core i7 supports 4 banks for L1 and 8 banks for L2
- Interleave banks according to block address (see the sketch below)
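Interleaving by block address means consecutive blocks land in different banks; a one-line sketch with an assumed bank count:

```c
#include <stdint.h>

#define NUM_BANKS 4u   /* assumed; e.g., a 4-banked cache */

/* Consecutive block addresses map round-robin to banks 0..NUM_BANKS-1. */
static inline uint32_t bank_of(uint32_t block_addr) {
    return block_addr % NUM_BANKS;
}
```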
Critical Word First, Early Restart
- Critical word first
  - Request the missed word from memory first
  - Send it to the processor as soon as it arrives
- Early restart
  - Request words in normal order
  - Send the missed word to the processor as soon as it arrives
- The effectiveness of these strategies depends on the block size and on the likelihood of another access to the portion of the block that has not yet been fetched
Merging Write Buffer
- When storing to a block that is already pending in the write buffer, update the existing write buffer entry
- Reduces stalls due to a full write buffer
- Not applied to I/O addresses
(figure: no write buffering vs. write buffering)
Compiler Optimizations
- Loop interchange
  - Swap nested loops to access memory in sequential order
- Blocking
  - Instead of accessing entire rows or columns, subdivide matrices into blocks
  - Requires more memory accesses but improves the locality of those accesses (see the sketch below)
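Both transformations are easiest to see in C. The first pair of loops below shows a column-order traversal of a row-major array and its interchanged, unit-stride version; the blocked matrix multiply is the standard sketch, with an assumed blocking factor B.

```c
#define N 1024
#define B 32                      /* blocking factor (assumed) */

/* Before interchange: column-order traversal of a row-major array,
   so consecutive accesses are N doubles apart (poor spatial locality). */
void scale_bad(double x[N][N]) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            x[i][j] *= 2.0;
}

/* After interchange: unit-stride, cache-friendly accesses. */
void scale_good(double x[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            x[i][j] *= 2.0;
}

/* Blocked matrix multiply: each BxB sub-block of y and z stays resident
   in the cache while it is reused. x is assumed to start zeroed. */
void matmul_blocked(double x[N][N], const double y[N][N], const double z[N][N]) {
    for (int jj = 0; jj < N; jj += B)
        for (int kk = 0; kk < N; kk += B)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < jj + B; j++) {
                    double r = 0.0;
                    for (int k = kk; k < kk + B; k++)
                        r += y[i][k] * z[k][j];
                    x[i][j] += r;
                }
}
```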
Hardware Prefetching
- Fetch two blocks on a miss (the requested block and the next sequential block)
(figure: Pentium 4 pre-fetching)
Compiler Prefetching
- Insert prefetch instructions before the data is needed
- Non-faulting: the prefetch doesn't cause exceptions
- Register prefetch: loads the data into a register
- Cache prefetch: loads the data into the cache
- Combine with loop unrolling and software pipelining
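A sketch of cache prefetching from the software side, using GCC/Clang's __builtin_prefetch; the prefetch distance is an assumed tuning parameter.

```c
/* Sum an array while prefetching a few elements ahead of use. */
#define PREFETCH_AHEAD 16   /* assumed distance, tuned per machine */

double sum(const double *a, long n) {
    double s = 0.0;
    for (long i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            /* read access (0), high temporal locality hint (3) */
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], 0, 3);
        s += a[i];
    }
    return s;
}
```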
Summary (figure)
Memory Technology
- Performance metrics
  - Latency is the main concern of caches
  - Bandwidth is the main concern of multiprocessors and I/O
  - Access time: the time between a read request and when the desired word arrives
  - Cycle time: the minimum time between unrelated requests to memory
- DRAM is used for main memory, SRAM for caches
Memory Technology
- SRAM
  - Requires low power to retain its bits
  - Requires 6 transistors per bit
- DRAM
  - Must be re-written after being read
  - Must also be periodically refreshed (roughly every 8 ms); an entire row is refreshed at once
  - One transistor per bit
  - Address lines are multiplexed:
    - Upper half of the address: row access strobe (RAS)
    - Lower half of the address: column access strobe (CAS)
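The multiplexing can be sketched by splitting an address into the row half (sent with RAS) and the column half (sent with CAS); the bit widths below are assumptions, not a specific device's geometry.

```c
#include <stdint.h>

#define ROW_BITS 14u   /* assumed DRAM geometry */
#define COL_BITS 10u

struct dram_addr { uint32_t row, col; };

static struct dram_addr split(uint32_t addr) {
    struct dram_addr a;
    a.col = addr & ((1u << COL_BITS) - 1);                 /* lower half: column (CAS) */
    a.row = (addr >> COL_BITS) & ((1u << ROW_BITS) - 1);   /* upper half: row (RAS)    */
    return a;
}
```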
Memory Technology
- Amdahl's rule of thumb: memory capacity should grow linearly with processor speed
  - Unfortunately, memory capacity and speed have not kept pace with processors
- Some optimizations:
  - Multiple accesses to the same row
  - Synchronous DRAM: added a clock to the DRAM interface; burst mode with critical word first
  - Wider interfaces
  - Double data rate (DDR)
  - Multiple banks on each DRAM device
Memory Optimizations (figures)
Memory Optimizations
- DDR generations:
  - DDR2: lower power (2.5 V -> 1.8 V), higher clock rates (266 MHz, 333 MHz, 400 MHz)
  - DDR3: 1.5 V, 800 MHz
  - DDR4: 1–1.2 V, 1600 MHz
- GDDR5 is graphics memory based on DDR3
Memory Optimizations
- Graphics memory:
  - Achieves 2–5× the bandwidth per DRAM of DDR3
    - Wider interfaces (32 bits vs. 16 bits)
    - Higher clock rate, possible because the parts are soldered down instead of placed in socketed DIMM modules
- Reducing power in SDRAMs:
  - Lower voltage
  - Low-power mode (ignores the clock, but continues to refresh)
Memory Power Consumption (figure)
Flash Memory
- A type of EEPROM
- Must be erased (in blocks) before being overwritten
- Non-volatile
- Limited number of write cycles
- Cheaper than SDRAM, more expensive than disk
- Slower than SRAM, faster than disk
Flash Types
- NOR flash: bit cell like a NOR gate
  - Random read/write access
  - Used for instruction memory in embedded systems
- NAND flash: bit cell like a NAND gate
  - Denser (bits/area), but block-at-a-time access
  - Cheaper per GB
  - Used for USB keys, media storage, …
- Flash bits wear out after 1000s of accesses
  - Not suitable for direct RAM or disk replacement
  - Wear leveling: remap data to less-used blocks
Memory Dependability
- Memory is susceptible to cosmic rays
- Soft errors: dynamic errors
  - Detected and fixed by error-correcting codes (ECC)
- Hard errors: permanent errors
  - Use spare rows to replace defective rows
- Chipkill: a RAID-like error recovery technique
Virtual Memory
- Protection via virtual memory
  - Keeps processes in their own memory space
- Role of the architecture:
  - Provide user mode and supervisor mode
  - Protect certain aspects of CPU state
  - Provide mechanisms for switching between user mode and supervisor mode
  - Provide mechanisms to limit memory accesses
  - Provide a TLB to translate addresses
Virtual Machines
- Support isolation and security
- Allow sharing a computer among many unrelated users
- Enabled by the raw speed of processors, which makes the overhead more acceptable
- Allow different ISAs and operating systems to be presented to user programs
  - "System virtual machines"
  - SVM software is called a "virtual machine monitor" or "hypervisor"
  - Individual virtual machines running under the monitor are called "guest VMs"
Impact of VMs on Virtual Memory
- Each guest OS maintains its own set of page tables
  - The VMM adds a level of memory between physical and virtual memory, called "real memory"
  - The VMM maintains a shadow page table that maps guest virtual addresses to physical addresses
    - Requires the VMM to detect the guest's changes to its own page table
    - Occurs naturally if accessing the page table pointer is a privileged operation
Address Translation
- Fixed-size pages (e.g., 4 KB)
(figure)
Page Fault Penalty
- On a page fault, the page must be fetched from disk
  - Takes millions of clock cycles
  - Handled by OS code
- Try to minimize the page fault rate
  - Fully associative placement
  - Smart replacement algorithms
Page Tables
- Store placement information
  - An array of page table entries (PTEs), indexed by virtual page number
  - A page table register in the CPU points to the page table in physical memory
- If the page is present in memory
  - The PTE stores the physical page number
  - Plus other status bits (referenced, dirty, …)
- If the page is not present
  - The PTE can refer to a location in swap space on disk
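A minimal sketch of a flat (single-level) page table and the translation it supports, assuming 4 KiB pages; the structure, field layout, and PAGE_FAULT sentinel are illustrative rather than the format of any particular architecture.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12u                     /* 4 KiB pages */
#define PAGE_FAULT 0xFFFFFFFFFFFFFFFFull   /* hypothetical sentinel */

struct pte {
    bool     valid, dirty, referenced;
    uint64_t ppn;          /* physical page number (or swap location if !valid) */
};

/* Translate a virtual address via a flat page table, or signal a page fault. */
uint64_t translate(const struct pte *page_table, uint64_t vaddr) {
    uint64_t vpn    = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & ((1ull << PAGE_SHIFT) - 1);
    const struct pte *e = &page_table[vpn];
    if (!e->valid)
        return PAGE_FAULT;                 /* OS must fetch the page from disk */
    return (e->ppn << PAGE_SHIFT) | offset;
}
```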
Translation Using a Page Table (figure)

Mapping Pages to Storage (figure)
Replacement and Writes
- To reduce the page fault rate, prefer least-recently used (LRU) replacement
  - A reference bit (aka use bit) in the PTE is set to 1 on any access to the page
  - Periodically cleared to 0 by the OS
  - A page with reference bit = 0 has not been used recently
- Disk writes take millions of cycles
  - Write a block at once, not individual locations
  - Write-through is impractical, so use write-back
  - A dirty bit in the PTE is set when the page is written
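A sketch of how the reference bit approximates LRU: the OS periodically clears the bits, and replacement prefers a page whose bit is still 0. The structures here are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

struct page { bool valid, referenced; };

/* Periodic OS sweep: clear reference bits so that pages not touched again
   before the next sweep appear "not recently used". */
void clear_reference_bits(struct page *pages, size_t n) {
    for (size_t i = 0; i < n; i++)
        pages[i].referenced = false;
}

/* Victim selection: prefer a resident page whose reference bit is still 0. */
long pick_victim(const struct page *pages, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (pages[i].valid && !pages[i].referenced)
            return (long)i;
    return n ? 0 : -1;    /* fall back to any page if all were referenced */
}
```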
Fast Translation Using a TLB
- Address translation would appear to require extra memory references
  - One to access the PTE, then the actual memory access
- But accesses to the page table have good locality
  - So use a fast cache of PTEs within the CPU, called a translation look-aside buffer (TLB)
  - Typical: 16–512 PTEs, 0.5–1 cycle for a hit, 10–100 cycles for a miss, 0.01%–1% miss rate
  - Misses can be handled by hardware or software
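A sketch of a small fully associative TLB in front of the page-table walk; the size, the walk_page_table() stub, and the replacement choice are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT  12u
#define TLB_ENTRIES 64        /* assumed size */

struct tlb_entry { bool valid; uint64_t vpn, ppn; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Stub for the slow path (in reality a hardware walker or an OS handler). */
static uint64_t walk_page_table(uint64_t vpn) { return vpn; }   /* identity map */

uint64_t translate_with_tlb(uint64_t vaddr) {
    uint64_t vpn    = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & ((1ull << PAGE_SHIFT) - 1);

    /* TLB hit: a fast on-chip lookup instead of an extra memory reference. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].ppn << PAGE_SHIFT) | offset;

    /* TLB miss: fetch the PTE, then refill a TLB entry and retry. */
    uint64_t ppn = walk_page_table(vpn);
    int victim = (int)(vpn % TLB_ENTRIES);     /* simple replacement choice */
    tlb[victim] = (struct tlb_entry){ true, vpn, ppn };
    return (ppn << PAGE_SHIFT) | offset;
}
```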
Fast Translation Using a TLB (figure)
TLB Misses
- If the page is in memory
  - Load the PTE from memory and retry
  - Could be handled in hardware
    - Can get complex for more complicated page table structures
  - Or in software
    - Raise a special exception, with an optimized handler
- If the page is not in memory (page fault)
  - The OS handles fetching the page and updating the page table
  - Then the faulting instruction is restarted
TLB Miss Handler
- A TLB miss indicates either
  - Page present, but PTE not in the TLB, or
  - Page not present
- Must recognize the TLB miss before the destination register is overwritten
  - Raise an exception
- The handler copies the PTE from memory into the TLB
  - Then restarts the instruction
  - If the page is not present, a page fault will occur
Page Fault Handler
- Use the faulting virtual address to find the PTE
- Locate the page on disk
- Choose a page to replace
  - If dirty, write it to disk first
- Read the page into memory and update the page table
- Make the process runnable again
  - Restart from the faulting instruction
TLB and Cache Interaction
- If the cache tag uses physical addresses
  - Need to translate before the cache lookup
- Alternative: use virtual address tags
  - Complications due to aliasing: different virtual addresses for the same shared physical address
Memory Protection
- Different tasks can share parts of their virtual address spaces
  - But we need to protect against errant accesses
  - Requires OS assistance
- Hardware support for OS protection
  - Privileged supervisor mode (aka kernel mode)
  - Privileged instructions
  - Page tables and other state information accessible only in supervisor mode
  - System call exception (e.g., syscall in MIPS)