
1 Computer Organization & Design 计算机组成与设计
Weidong Wang (王维东), College of Information Science & Electronic Engineering, Institute of Information and Communication Network Engineering (ICAN), Zhejiang University

2 Course Information
Instructor: Weidong WANG; Tel(O): ; Mobile: ; Office Hours: TBD, Yuquan Campus, Xindian (High-Tech) Building 306.
TAs: Binbin CHEN (陈彬彬), mobile ; Jiayun CHEN (陈佳云), mobile ; Office Hours: Wednesday & Saturday 14:00-16:30, Xindian (High-Tech) Building 308 (SMS and email are also fine). WeChat group: "2017计组群".

3 Lecture 11 Virtual Memory

4 Motivation #1: Large Address Space for Each Executing Program
Each program thinks it has a ~2^32-byte address space of its own (it may not use it all, though); the available main memory may be much smaller.
Notes on the address-space layout: 1. The heap and the stack are distinct regions. 2. Stack space is allocated and released automatically by the system; it holds automatic variables and the state of function calls (argument values, local variables), and its addresses grow downward from high to low. 3. Heap space is allocated and released explicitly; its addresses grow upward from low to high, and it is typically allocated with new or malloc-style functions. 4. Stack space is limited, while the heap is a large free region.
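A small C program makes the layout concrete; this is only an illustrative sketch, since the actual addresses printed are system-dependent:

```c
/* Minimal sketch: stack and heap live in different regions of one
 * virtual address space; exact addresses vary by system. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack;                              /* automatic variable -> stack */
    int *on_heap = malloc(sizeof *on_heap);    /* malloc'd storage   -> heap  */

    printf("stack variable at %p\n", (void *)&on_stack);
    printf("heap  variable at %p\n", (void *)on_heap);
    /* Typically the stack address is much higher than the heap address,
     * even though the program sees one large contiguous address space. */
    free(on_heap);
    return 0;
}
```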

5 Motivation #2: Memory Management for Multiple Programs
At any point in time, a computer may be running multiple programs, e.g., Firefox + Thunderbird. Questions: How do we share memory between multiple programs? How do we avoid address conflicts? How do we protect programs from one another, with isolation and selective sharing?

6 DRAM vs. SRAM as a “Cache”
DRAM vs. disk is more extreme than SRAM vs. DRAM. Access latencies: DRAM is ~10X slower than SRAM; disk is ~100,000X slower than DRAM. Hence the importance of exploiting spatial locality: the first byte is ~100,000X slower than successive bytes on disk, vs. only a ~4X improvement for page-mode vs. regular accesses to DRAM.

7 1) Memory Address Translation

8 Memory Management
From early absolute addressing schemes to modern virtual memory systems with support for virtual machine monitors. Can separate into orthogonal functions: Translation (mapping of virtual address to physical address), Protection (permission to access a word in memory), and Virtual memory (transparent extension of memory space using slower disk storage). But most modern systems provide support for all of the above functions with a single page-based system.

9 Absolute Addresses EDSAC, early 50’s
Only one program ran at a time, with unrestricted access to the entire machine (RAM + I/O devices). Addresses in a program depended upon where the program was to be loaded in memory, but it was more convenient for programmers to write location-independent subroutines. How could location independence be achieved? Load-time rewriting; PC-relative branches; PC-relative data access or a base pointer set on entry to a routine. The linker and/or loader modify the addresses of subroutines and callers when building a program memory image.

10 Bare Machine
[Pipeline diagram: PC, decode, execute, memory, and writeback stages with an instruction cache, a data cache, a memory controller, and main memory (DRAM); every address on every path is a physical address.]
In a bare machine, the only kind of address is a physical address.

11 Dynamic Address Translation
Motivation: in the early machines, I/O operations were slow and each word transferred involved the CPU; throughput is higher if the CPU and I/O of two or more programs are overlapped. How? Multiprogramming, with DMA I/O devices and interrupts. Location-independent programs: ease of programming and storage management → need for a base register. Protection: independent programs should not affect each other inadvertently → need for a bound register. Multiprogramming drives the requirement for resident supervisor software to manage context switches between multiple programs. [Diagram: physical memory holding prog1, prog2, and the OS.]

12 Simple Base and Bound Translation
[Diagram: for a Load X, the effective address is added to the Base Register to form the physical address within the current segment in physical memory; the effective address is also compared against the Bound Register (segment length) to detect a bounds violation.]
Base and bound registers are visible/accessible only when the processor is running in supervisor mode.
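A minimal C sketch of the check-and-relocate step described above; the register widths and the way a violation is reported are assumptions, not a particular machine's hardware:

```c
/* Base-and-bound translation: relocate by base, check against bound. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t base;   /* physical address where the segment starts */
    uint32_t bound;  /* segment length in bytes                   */
} seg_regs_t;

/* Returns false on a bounds violation (hardware would raise an exception). */
bool translate(seg_regs_t s, uint32_t eff_addr, uint32_t *phys_addr) {
    if (eff_addr >= s.bound)          /* checked on every access */
        return false;
    *phys_addr = s.base + eff_addr;   /* relocation by the base register */
    return true;
}
```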

13 Separate Areas for Program and Data
[Diagram: separate program and data segments, each with its own base and bound registers; the program counter is checked against the program bound register and relocated by the program base register, while load/store addresses are checked and relocated through the data bound and base registers into main memory.]
Permits sharing of program segments. What is an advantage of this separation? (Scheme used on all Cray vector supercomputers prior to the X1, 2002.)

14 Base and Bound Machine
[Diagram: the bare-machine pipeline of slide 10, extended with program and data base/bound registers; every instruction fetch and every data access is bounds-checked and relocated before reaching the instruction/data caches, the memory controller, and main memory (DRAM).]
[Can fold the addition of the base register into the (base + offset) calculation using a carry-save adder, which sums three numbers with only a few gate delays more than adding two numbers.]

15 Memory Fragmentation
[Diagram: three snapshots of physical memory (OS space plus user regions of 16K, 24K, 32K, and 8K) as users 4 & 5 arrive and users 2 & 5 leave, leaving free holes scattered between the remaining users.]
Sometimes called "burping" the memory: as users come and go, the storage becomes fragmented, so at some stage programs have to be moved around to compact the storage.

16 Paged Memory Systems
A processor-generated address can be split into a page number and an offset. A page table contains the physical address of the base of each page. [Diagram: pages 0-3 of User-1's address space map through the page table of User-1 to scattered page frames in physical memory.] Paging relaxes the contiguous-allocation requirement: page tables make it possible to store the pages of a program non-contiguously.
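A minimal C sketch of the split-and-lookup just described, assuming 4 KB pages and a single-level table; the PTE layout is an illustrative assumption:

```c
/* Single-level paged translation with 4 KB pages. */
#include <stdint.h>

#define PAGE_BITS 12u                   /* 4 KB page -> 12 offset bits */
#define PAGE_SIZE (1u << PAGE_BITS)

typedef struct {
    uint32_t ppn;    /* physical page number of the frame */
    uint8_t  valid;  /* is the page resident in memory?   */
} pte_t;

/* page_table[vpn] is the entry for virtual page vpn. */
uint32_t translate(const pte_t *page_table, uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_BITS;       /* page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* page offset */
    pte_t pte = page_table[vpn];
    if (!pte.valid) {
        /* would trap to the OS here: page fault */
    }
    return (pte.ppn << PAGE_BITS) | offset;     /* physical address */
}
```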

17 Private Address Space per User
[Diagram: Users 1-3, each with a private page table mapping into disjoint frames of physical memory, alongside OS pages and free frames.]
Each user has a page table, and the page table contains an entry for each user page. The OS ensures that the page tables are disjoint.

18 Where Should Page Tables Reside?
Space required by the page tables (PT) is proportional to the address space, the number of users, ... → the space requirement is large and too expensive to keep in registers. Idea: keep the PTs in main memory. But then an access needs one memory reference to retrieve the page base address and another to access the data word → this doubles the number of memory references!

19 Page Tables in Physical Memory
[Diagram: the page tables themselves (PT of User 1, PT of User 2) reside in physical memory alongside the users' data pages; VA1 in each user's virtual address space maps through that user's table.]

20 A Problem in the Early Sixties
There were many applications whose data could not fit in the main memory, e.g., payroll. A paged memory system reduced fragmentation but still required the whole program to be resident in main memory. Programmers moved the data back and forth from the secondary store by overlaying it repeatedly on the primary store: tricky programming!

21 Manual Overlays
Assume an instruction can address all the storage on the drum. Method 1: the programmer keeps track of addresses in main memory and initiates an I/O transfer when required; difficult and error-prone! Method 2: automatic initiation of I/O transfers by software address translation (Brooker's interpretive coding, 1960); inefficient!
[Diagram: Ferranti Mercury, 1956: a 40k-bit main (central) store backed by a 640k-bit drum.]
The British firm Ferranti built the Mercury and then the Atlas. Method 1 was too difficult for users and Method 2 was too slow. This is not just an ancient black art; e.g., the IBM Cell microprocessor used in the PlayStation 3 has an explicitly managed local store!

22 Demand Paging in Atlas (1962)
“A page from secondary storage is brought into the primary storage whenever it is (implicitly) demanded by the processor.” Tom Kilburn
[Diagram: primary memory of 32 pages, 512 words/page, backed by a secondary drum store of 32x6 pages; the primary memory acts as a cache for the secondary memory.]
Single-level store: the user sees 32 x 6 x 512 words of storage.

23 Hardware Organization of Atlas
[Diagram: 16 ROM pages (0.4-1 µs) holding system code (not swapped); 2 subsidiary pages (1.4 µs) for system data; a main store of 32 pages (1.4 µs); 4 drums holding 192 pages; and 8 tape decks at 88 µs/word. The machine has 48-bit words and 512-word pages, with one Page Address Register (PAR) per page frame holding <effective PN, status>.]
The effective page address is compared against all 32 PARs: a match means normal access; no match means a page fault, and the state of the partially executed instruction is saved.

24 Atlas Demand Paging Scheme
On a page fault: an input transfer into a free page is initiated and the Page Address Register (PAR) is updated. If no free page is left, a page is selected to be replaced (based on usage); the replaced page is written to the drum (to minimize the drum latency effect, the first empty page on the drum was selected), and the page table is updated to point to the new location of the page on the drum.

25 Linear Page Table
A Page Table Entry (PTE) contains: a bit to indicate whether the page exists; the PPN (physical page number) for a memory-resident page; the DPN (disk page number) for a page on the disk; and status bits for protection and usage. The OS sets the Page Table Base Register whenever the active user process changes.
[Diagram: the virtual address is split into VPN and offset; the VPN, added to the PT Base Register, indexes a linear page table whose entries hold PPNs or DPNs; resident entries point to data pages in memory, and the offset selects the word within the page.]

26 Size of Linear Page Table
With 32-bit addresses, 4-KB pages, and 4-byte PTEs: 2^20 = 1M PTEs, i.e., a 4-MB page table per user, and 4 GB of swap is needed to back up the full virtual address space. Larger pages? Internal fragmentation (not all memory in a page is used) and a larger page-fault penalty (more time to read from disk). What about a 64-bit virtual address space??? Even 1-MB pages would require ~2^44 PTEs, i.e., tens of terabytes of page table! What is the "saving grace"? (4 KB = 2^12, so the offset is 12 bits.) The virtual address space is large, but only a small fraction of the pages are populated, so we can use a sparse representation of the table.
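A quick back-of-the-envelope check of these numbers (a sketch; the 4-byte and 8-byte PTE sizes are the usual assumptions, and the exact 64-bit figure depends on the PTE size chosen):

```c
/* Flat page-table size = (address-space size / page size) * bytes per PTE. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t t32 = ((1ull << 32) / (4 * 1024)) * 4;  /* 32-bit VA, 4 KB pages, 4 B PTEs */
    uint64_t t64 = (1ull << 44) * 8;                 /* 64-bit VA, 1 MB pages -> 2^44 PTEs, 8 B each */
    printf("32-bit flat table: %llu MB\n", (unsigned long long)(t32 >> 20)); /* 4 MB   */
    printf("64-bit flat table: %llu TB\n", (unsigned long long)(t64 >> 40)); /* 128 TB */
    return 0;
}
```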

27 Hierarchical Page Table
[Diagram: the 32-bit virtual address is split into a 10-bit L1 index (bits 31-22), a 10-bit L2 index (bits 21-12), and a 12-bit offset (bits 11-0). The root of the current page table (a processor register) points to the Level 1 page table; each L1 entry points to a Level 2 page table; L2 entries point to data pages in primary memory, to pages in secondary memory, or mark the PTE of a nonexistent page.]
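A minimal C sketch of this two-level walk, following the 10 + 10 + 12 bit split on the slide; the PTE format and the frame_to_table helper are illustrative assumptions:

```c
/* Two-level page-table walk (10-bit L1 index, 10-bit L2 index, 12-bit offset). */
#include <stdint.h>

typedef struct { uint32_t ppn : 20, valid : 1; } pte_t;

extern pte_t *l1_table;                       /* root of the current page table  */
extern pte_t *frame_to_table(uint32_t ppn);   /* hypothetical: frame -> L2 table */

uint32_t walk(uint32_t vaddr) {
    uint32_t i1  = (vaddr >> 22) & 0x3FF;     /* bits 31..22: L1 index */
    uint32_t i2  = (vaddr >> 12) & 0x3FF;     /* bits 21..12: L2 index */
    uint32_t off =  vaddr        & 0xFFF;     /* bits 11..0 : offset   */

    pte_t e1 = l1_table[i1];
    if (!e1.valid) { /* page fault: no L2 table allocated */ }
    pte_t e2 = frame_to_table(e1.ppn)[i2];
    if (!e2.valid) { /* page fault: page not resident     */ }
    return (e2.ppn << 12) | off;              /* physical address      */
}
```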

28 Two-Level Page Tables in Physical Memory
[Diagram: each user's Level 1 and Level 2 page tables, like the data pages themselves, occupy frames of physical memory; the same virtual address VA1 in User 1 and User 2 maps through different tables to different physical frames.]

29 Address Translation & Protection
[Diagram: the virtual page number (VPN) and offset enter an address-translation and protection-check unit, together with the kernel/user mode and the read/write intent; the result is either an exception or a physical page number (PPN) plus offset.]
Every instruction and data access needs address translation and protection checks. A good VM design needs to be fast (~ one cycle) and space-efficient.

30 Fast Translation using a TLB (Translation Lookaside Buffer)
Also known as a page-table cache or address-translation lookaside cache.

31 Fast Translation using a TLB

32 Translation Lookaside Buffers (TLB)
Address translation is very expensive! With a two-level page table, each reference becomes several memory accesses (in the worst case, 3 memory references plus 2 page faults, i.e., disk accesses). Solution: cache translations in a TLB. TLB hit → single-cycle translation; TLB miss → page-table walk to refill the TLB.
[Diagram: the virtual address is split into VPN and offset; the VPN is matched against the TLB entries, each holding V, R, W, D bits, a tag, and a PPN; on a hit the PPN is concatenated with the offset to form the physical address.]
(A translation buffer of this kind was actually used in IBM machines before paged memory.)
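A minimal C sketch of a fully associative TLB in front of the page-table walk; the entry format, the walk() helper from the earlier sketch, and the simplistic refill policy are all assumptions:

```c
/* TLB lookup: hit -> translation without touching memory; miss -> walk and refill. */
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

typedef struct {
    bool     valid;
    uint32_t vpn;   /* tag : virtual page number   */
    uint32_t ppn;   /* data: physical page number  */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
extern uint32_t walk(uint32_t vaddr);   /* two-level walk from the earlier sketch */

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> 12, off = vaddr & 0xFFF;
    for (int i = 0; i < TLB_ENTRIES; i++)          /* associative match (1 cycle in HW) */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].ppn << 12) | off;
    uint32_t paddr = walk(vaddr);                  /* miss: page-table walk             */
    tlb[vpn % TLB_ENTRIES] =                       /* naive refill; real TLBs use       */
        (tlb_entry_t){ true, vpn, paddr >> 12 };   /* random/FIFO/LRU replacement       */
    return paddr;
}
```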

33 TLB Designs Typically 32-128 entries, usually fully associative
Each entry maps a large page, hence there is less spatial locality across pages → it is more likely that two entries conflict. Sometimes larger TLBs are 4-8 way set-associative, and larger systems sometimes have multi-level (L1 and L2) TLBs. Replacement is typically random or FIFO. No process information in the TLB? TLB Reach: the size of the largest virtual address space that can be simultaneously mapped by the TLB. Example: 64 TLB entries, 4-KB pages, one page per entry. TLB Reach = 64 entries * 4 KB = 256 KB (if contiguous).

34 TLB Entries The TLB is a cache for page table entries (PTE)
The data for a TLB entry (== a PTE): the physical page number (frame #), the access rights (R/W bits), and any other PTE information (dirty bit, LRU info, etc.). The tag for a TLB entry: the virtual page number (the portion of it not used for indexing into the TLB), a valid bit, and LRU bits (if the TLB is associative and LRU, Least Recently Used, replacement is used).

35 TLB Case Study: MIPS R2000/R3000

36 TLB Misses
If the page is in memory: load the PTE from memory and retry. This can be handled in hardware (which can get complex for more complicated page-table structures) or in software (raise a special exception with an optimized handler; this is what MIPS does, using a special vectored interrupt).
If the page is not in memory (page fault): the OS handles fetching the page and updating the page table, then the faulting instruction is restarted.

37 Handling a TLB Miss
Software (MIPS, Alpha): a TLB miss causes an exception, and the operating system walks the page tables and reloads the TLB. A privileged "untranslated" addressing mode is used for the walk.
Hardware (SPARC v8, x86, PowerPC): a memory management unit (MMU) walks the page tables and reloads the TLB. If a missing (data or PT) page is encountered during the TLB reload, the MMU gives up and signals a page-fault exception for the original instruction.

38 TLB & Memory Hierarchies
Once the address is translated, it is used to access the memory hierarchy: a hierarchy of caches (L1, L2, etc.).

39 Hierarchical Page Table Walk: SPARC v8
[Diagram: the virtual address supplies three indices plus an offset. The Context Table Register selects a root pointer in the Context Table; the root pointer and the successive indices lead through the L1, L2, and L3 tables (via page-table pointers, PTPs) to a PTE, whose PPN is combined with the offset to form the physical address.]
The MMU does this table walk in hardware on a TLB miss.

40 TLB and Cache Interaction
Basic process: use the TLB to translate the VA into a PA, then use the PA to access the caches and DRAM. Question: can you ever access the TLB and the cache in parallel?

41 Page-Based Virtual-Memory Machine (Hardware Page-Table Walk)
[Diagram: the pipeline now has an instruction TLB in front of the instruction cache and a data TLB in front of the data cache; each TLB can raise page-fault or protection-violation exceptions, and on a TLB miss a hardware page-table walker, rooted at the Page-Table Base Register, refills the TLB through the memory controller; the caches and main memory (DRAM) operate on physical addresses.]
Assumes the page tables are held in untranslated physical memory.

42 Address Translation: putting it all together
[Flowchart: the virtual address first goes to a TLB lookup (hardware). On a hit, a protection check (hardware) follows: if permitted, the physical address goes to the cache; if denied, a protection fault (SEGFAULT) is raised. On a miss, a page-table walk (hardware or software) follows: if the page is in memory, the TLB is updated and the lookup retried; if the page is not in memory, a page fault is raised and the OS loads the page (software), after which the instruction is restarted. Soft and hard page faults differ in whether the page must come from disk.]

43 TLB Caveats
What happens to the TLB when switching between programs? The OS must flush the entries in the TLB, causing a large number of TLB misses after every switch. Alternatively, put PIDs (process IDs) in each TLB entry: this allows entries from multiple programs to co-exist and be replaced gradually.
Limited reach: a 64-entry TLB with 8-KB pages maps only 0.5 MB, smaller than many L2 caches in most systems, so the TLB miss rate can exceed the L2 miss rate! Potential solutions: multilevel TLBs (just like multi-level caches) and larger pages.

44 Page Size Tradeoff
Larger pages. Advantages: smaller page tables; fewer page faults and more efficient transfers for larger applications; improved TLB coverage. Disadvantage: higher internal fragmentation.
Smaller pages. Advantages: improved start-up time for small processes with fewer pages; internal fragmentation is low (important for small programs). Disadvantage: high overhead of large page tables.
The general trend is toward larger pages: 1978: 512 B, 1984: 4 KB, 1990: 16 KB, 2000: 64 KB.

45 Multiple Page Sizes Many machines support multiple page sizes
SPARC: 8 KB, 64 KB, 1 MB, 4 MB. MIPS R4000: 4 KB to 16 MB. The page size used depends on the application: the OS kernel uses large pages, while user applications use smaller pages. Issues: software complexity and TLB complexity.

46 Page Table Size 46

47 Two-level Page Tables 47

48 Address Summary
Protection and translation are required for multiprogramming; base and bounds was an early, simple scheme. Page-based translation and protection avoids the need for memory compaction and makes allocation easy for the OS, but it requires an indirection through a large page table on every access. Address spaces are accessed sparsely, so a multi-level page table can hold the translation/protection information, but this implies multiple memory accesses per reference. Address-space accesses have locality, so a "translation lookaside buffer" (TLB, sometimes known as an address translation cache) can cache address translations; we still have to walk the page tables on a TLB miss, in hardware or software. Virtual memory uses DRAM as a "cache" of disk, which allows main memory to be relatively cheap.

49 2) Virtual Memory

50 2) Virtual Memory
Motivation: a large address space for each program, and memory management for multiple programs.

51 Virtual Memory 51

52 A simple memory system 52

53 Modern Virtual Memory Systems Illusion of a large, private, uniform store
Protection & privacy: several users, each with their own private address space and one or more shared address spaces; the page table defines the name space. Demand paging provides the ability to run programs larger than the primary memory and hides differences in machine configurations (portability across machines with different memory configurations). The price is address translation on each memory reference.
[Diagram: each user's virtual addresses (VA) are mapped, via the TLB and the page tables maintained by the OS, to physical addresses (PA) in primary memory, backed by a swapping store.]

54 Hierarchical Page Table
[Diagram: repeat of slide 27: the virtual address is split into a 10-bit L1 index, a 10-bit L2 index, and a 12-bit offset; the root register points to the Level 1 page table, whose entries point to Level 2 page tables, whose entries point to data pages in primary or secondary memory or mark nonexistent pages.]

55 A System with Virtual-Memory

56 Page-Based Virtual-Memory Machine (Hardware Page-Table Walk)
[Diagram: repeat of the page-based virtual-memory pipeline from slide 41, with instruction and data TLBs, a hardware page-table walker rooted at the Page-Table Base Register, and physically addressed caches, memory controller, and main memory (DRAM).]
Assumes the page tables are held in untranslated physical memory.

57 Locating an Object in a “Cache”

58 Locating an Object in a “Cache”


60 Address Translation: putting it all together
[Flowchart: repeat of the translation flow from slide 42: TLB lookup (hardware); on a miss, a page-table walk (hardware or software) followed by a TLB update and a restart of the instruction, or a page fault handled by the OS (software); on a hit, a protection check that either passes the physical address to the cache or raises a protection fault (SEGFAULT).]



63 Handling VM-related exceptions
[Diagram: the pipeline with instruction and data TLBs; either TLB can signal a TLB miss, a page fault, or a protection violation.]
Handling a TLB miss needs a hardware or software mechanism to refill the TLB. Handling a page fault (e.g., the page is on disk) needs a restartable exception so the software handler can resume after retrieving the page; precise exceptions are easy to restart, and an exception can be imprecise but restartable, though this complicates OS software. Handling a protection violation may abort the process, but it is often handled the same way as a page fault.

64 Address Translation in CPU Pipeline
[Diagram: the same pipeline with instruction and data TLBs, each able to raise TLB-miss, page-fault, or protection-violation exceptions.]
We need to cope with the additional latency of the TLB: slow down the clock? pipeline the TLB and cache access? use virtually addressed caches? access the TLB and cache in parallel?

65 Translation: High Level View

66 Translation: Process 66

67 Translation Process Explained

68 Virtual-Address Caches
[Diagram: conventional organization: CPU → TLB → physically addressed cache → primary memory. Alternative (e.g., StrongARM): place the cache before the TLB: CPU → virtual cache → TLB → primary memory.]
One-step process in the case of a hit (+); the cache needs to be flushed on a context switch unless address-space identifiers (ASIDs) are included in the tags (-); aliasing problems due to the sharing of pages (-); maintaining cache coherence is harder (-) (see later in the course).

69 Virtually Addressed Cache (Virtual Index/Virtual Tag)
[Diagram: the instruction and data caches are accessed with virtual addresses; the instruction and data TLBs and the hardware page-table walker (rooted at the Page-Table Base Register) are consulted only on a miss, producing the physical addresses used by the memory controller and main memory (DRAM).]
Translate on miss.

70 Aliasing in Virtual-Address Caches
[Diagram: two virtual pages VA1 and VA2 map through the page table to the same physical page PA; the virtual cache then holds a first copy of the data tagged VA1 and a second copy tagged VA2.]
A virtual cache can have two copies of the same physical data, and writes to one copy are not visible to reads of the other! This arises, for example, when two processes sharing the same file map the same memory segment to different parts of their address spaces. General solution: prevent aliases from coexisting in the cache. A software (i.e., OS) solution for a direct-mapped cache: the VAs of shared pages must agree in the cache index bits; this ensures that all VAs accessing the same PA will conflict in a direct-mapped cache (early SPARCs).

71 VM: Replacement and Writes

72 Concurrent Access to TLB & Cache (Virtual Index/Physical Tag)
[Diagram: the VPN goes to the TLB while the k-bit page offset supplies the L cache-index bits and the b block-offset bits of a direct-mapped cache with 2^L blocks of 2^b bytes; the PPN delivered by the TLB is compared with the physical tag read from the cache to decide a hit.]
The index L is available without consulting the TLB, so the cache and TLB accesses can begin simultaneously! The tag comparison is made after both accesses are completed. Cases: L + b = k, L + b < k, L + b > k.
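A quick worked check of that condition, as a sketch with assumed sizes (4 KB pages, 32-byte blocks, direct-mapped caches):

```c
/* Parallel TLB/cache access works when index + block-offset bits fit in the
 * untranslated page offset: L + b <= k. */
#include <stdio.h>

int main(void) {
    int k = 12;                     /* 4 KB page  -> 12 offset bits        */
    int b = 5;                      /* 32 B block -> 5 block-offset bits   */
    int sizes_kb[] = { 4, 8, 16 };  /* candidate direct-mapped cache sizes */
    for (int i = 0; i < 3; i++) {
        int blocks = sizes_kb[i] * 1024 / 32, L = 0;
        while ((1 << L) < blocks) L++;          /* number of index bits    */
        printf("%2d KB cache: L + b = %2d, k = %d -> %s\n", sizes_kb[i],
               L + b, k, (L + b <= k) ? "parallel OK" : "index needs VPN bits");
    }
    return 0;   /* 4 KB: 12 <= 12 fine; 8 KB and 16 KB exceed the page offset */
}
```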

73 Virtual-Index Physical-Tag Caches: Associative Organization
[Diagram: the virtual index (L = k - b set-index bits plus the b block-offset bits, all taken from the page offset) selects a set in each of the 2^a ways while the VPN is translated in the TLB; after the PPN is known, the 2^a physical tags are compared.]
Consider 4-KB pages and caches with 32-byte blocks: a 32-KB cache needs 2^a = 8 ways, while a 4-MB cache would need 2^a = 1024 ways: not practical! How does this scheme scale to larger caches?
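The required associativity follows directly from the sizes; a small sketch using the slide's assumed 4-KB pages:

```c
/* With a virtual index confined to the page offset, each way can cover at
 * most one page, so: ways needed = cache size / page size. */
#include <stdio.h>

int main(void) {
    unsigned page_bytes = 4 * 1024;
    unsigned cache_bytes[] = { 32u * 1024, 4u * 1024 * 1024 };   /* 32 KB, 4 MB */
    for (int i = 0; i < 2; i++)
        printf("%8u-byte cache needs %u ways\n",
               cache_bytes[i], cache_bytes[i] / page_bytes);
    return 0;   /* prints 8 ways for 32 KB and 1024 ways for 4 MB */
}
```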

74 Concurrent Access to TLB & Large L1: the Problem with L1 > Page Size
[Diagram: a direct-mapped, physically tagged L1 cache whose virtual index includes 'a' bits above the page offset; VA1 and VA2, which differ only in those 'a' bits but share the same physical page, land in different cache lines, each holding data tagged with PPNa.]
Can VA1 and VA2 both map to the same PA? Yes, if they differ only in the lower 'a' bits and share a physical page.

75 A solution via Second Level Cache
[Diagram: the CPU's L1 instruction cache and L1 data cache (beside the register file) are both backed by a unified L2 cache in front of memory.]
Usually a common L2 cache backs up both the instruction and data L1 caches. L2 is "inclusive" of both instruction and data caches: inclusive means L2 has a copy of any line in either L1.

76 Anti-Aliasing Using L2: MIPS R10000
[Diagram: the virtually indexed, physically tagged L1 of slide 74; the 'a' virtual-index bits are also carried into the tag of a direct-mapped, physically addressed L2.]
Suppose VA1 and VA2 both map to PA and VA1 is already in L1 and L2 (VA1 ≠ VA2). After VA2 is resolved to PA, a collision will be detected in L2: VA1 will be purged from L1 and L2, and VA2 will be loaded → no aliasing!

77 Anti-Aliasing using L2 for a Virtually Addressed L1
[Diagram: a virtually indexed and virtually tagged L1 backed by a physically indexed and tagged L2; each L2 entry also records which L1 virtual address (e.g., VA1) currently holds the line.]
A physically-addressed L2 can also be used to avoid aliases in a virtually-addressed L1; L2 "contains" L1.

78 Page Fault Handler When the referenced page is not in DRAM:
The missing page is located (or created); it is brought in from disk, and the page table is updated. Another job may be run on the CPU while the first job waits for the requested page to be read from disk. If no free pages are left, a page is swapped out, chosen by a pseudo-LRU replacement policy. Since it takes a long time to transfer a page (milliseconds), page faults are handled completely in software by the OS. An untranslated addressing mode is essential to allow the kernel to access the page tables.
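A rough C sketch of the handler steps listed above; every helper routine named here (find_free_frame, choose_victim_pseudo_lru, and so on) is a hypothetical placeholder, not any real kernel's API:

```c
#include <stdint.h>

typedef struct { uint32_t ppn : 20, valid : 1; } pte_t;   /* as in earlier sketches */

/* Hypothetical OS helpers, declared only to keep the sketch self-contained. */
extern int  find_free_frame(void);
extern int  choose_victim_pseudo_lru(void);
extern void write_page_to_disk(int frame);
extern void invalidate_old_mapping(int frame);
extern void schedule_another_job(void);
extern void read_page_from_disk(uint32_t dpn, int frame);

void handle_page_fault(pte_t *pte, uint32_t dpn /* disk page number */) {
    int frame = find_free_frame();
    if (frame < 0) {                          /* no free page left          */
        frame = choose_victim_pseudo_lru();   /* pick a page to replace     */
        write_page_to_disk(frame);            /* swap the victim out        */
        invalidate_old_mapping(frame);        /* fix the victim's PTE / TLB */
    }
    schedule_another_job();                   /* run other work while the   */
    read_page_from_disk(dpn, frame);          /* slow disk transfer happens */
    pte->ppn   = frame;                       /* update the page table      */
    pte->valid = 1;
    /* return from the exception: the faulting instruction is restarted */
}
```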

79 Atlas Revisited One PAR for each physical page
The PARs contain the VPNs of the pages resident in primary memory. Advantage: the size of this structure is proportional to the size of primary memory. What is the disadvantage? [Diagram: the PAR array, indexed by PPN, with each entry holding a VPN.]

80 VM features track historical uses:
Bare machine, only physical addresses: one program owned the entire machine. Batch-style multiprogramming: several programs sharing the CPU while waiting for I/O; base & bound provided translation and protection between programs (not virtual memory); the problem was external fragmentation (holes in memory), which required occasional memory defragmentation as new jobs arrived. Time sharing: more interactive programs waiting for the user, and more jobs per second; this motivated the move to fixed-size page translation and protection with no external fragmentation (but now internal fragmentation, wasted bytes within a page), and motivated the adoption of virtual memory to allow more jobs to share limited physical memory resources while holding the working set in memory. Virtual machine monitors: run multiple operating systems on one machine; an idea from 1970s IBM mainframes, now common on laptops (e.g., run Windows on top of Mac OS X); hardware support for two levels of translation/protection: guest OS virtual → guest OS physical → host machine physical.

81 Virtual Memory Use Today - 1
Servers, desktops, laptops, and smartphones have full demand-paged virtual memory: portability between machines with different memory sizes; protection between multiple users or multiple tasks; sharing of a small physical memory among active tasks; and simpler implementation of some OS features. Vector supercomputers have translation and protection but rarely complete demand paging (older Crays: base & bound; Japanese machines & Cray X1/X2: pages): they don't want to waste expensive CPU time thrashing to disk (make jobs fit in memory); they mostly run in batch mode (run a set of jobs that fits in memory); and restartable vector instructions are difficult to implement.

82 Virtual Memory Use Today - 2
Most embedded processors and DSPs provide physical addressing only: they can't afford the area/speed/power budget for virtual memory support; often there is no secondary storage to swap to; programs are custom written for the particular memory configuration in the product; and restartable instructions are difficult to implement for exposed architectures.

83 Virtual Memory Summary
Use the hard disk (or flash) as large storage for the data of all programs; main memory (DRAM) is a cache for the disk, managed jointly by hardware and the operating system (OS). Each running program has its own virtual address space (the address space shown in the earlier figure), protected from other programs. Frequently-used portions of the virtual address space are copied into DRAM, the physical address space. Hardware + OS translate the virtual addresses (VA) used by a program into the physical addresses (PA) used by the hardware; translation enables relocation & protection.

84 Homework
Readings: read Chapter 6.1-6.14; read "Parallel Processors from Client to Cloud.pdf".
Exercises: Project 2. HW3 due December 21; HW4 due January 4. Do exercises 5.11, 5.12, 5.14, 5.18 from Computer Organization and Design (COD) (Fifth Edition).

85 Acknowledgements
These slides contain material from courses: UC Berkeley CS152, Stanford EE108B, and MIT 6.823.

