COMPSYS 304 Computer Architecture Memory Management Units Reefed down - heading for Great Barrier Island.


Memory Management Unit - Physical and Virtual Memory
- Physical memory presents a flat address space
  - Addresses 0 to 2^p - 1, where p = number of bits in an address word
- User programs accessing this space conflict in
  - multi-user (eg Unix) and
  - multi-process (eg real-time) systems
- Virtual address space
  - Each user has a "private" address space
  - Addresses 0 to 2^q - 1
  - q and p are not necessarily the same!

Memory Management Unit - Virtual Address Space
- Each user has a "private" address space
[Figure: separate per-user virtual address spaces, eg User D's address space]

Virtual Addresses
- The CPU (running user programs) emits a Virtual Address
- The MMU translates it to a Physical Address
- The operating system allocates pages of physical memory to users

Virtual Addresses
- Mappings between user space and physical memory are created by the OS

Memory Management Unit (MMU)
- Responsible for VIRTUAL -> PHYSICAL address mapping
- Sits between CPU and cache
- The cache operates on Physical Addresses (mostly - there is some research on virtually-addressed caches)
[Figure: CPU -(VA)-> MMU -(PA)-> Cache -> Main Memory, carrying data (D) or instructions (I)]

MMU - operation
- The OS constructs page tables - one for each user
- The page number field of the memory address selects a page table entry
- The page table entry contains the physical page address

MMU - operation
[Figure: a q-bit virtual address split into a (q-k)-bit page number, which indexes the page table, and a k-bit offset within the page]

MMU - Virtual memory space
- Page table entries can also point to disc blocks
- Valid bit
  - Set: page is in memory; the address is a physical page address
  - Cleared: page is "swapped out"; the address is a disc block address
- The MMU hardware generates a page fault when a swapped-out page is requested
- Allows the virtual memory space to be larger than physical memory
  - Only the "working set" is in physical memory
  - The remainder is on the paging disc

MMU - Page Faults
- Page fault handler
  - Part of the OS kernel
  - Finds a usable physical page (LRU algorithm)
  - Writes it back to disc if modified
  - Reads the requested page from the paging disc
  - Adjusts the page table entries
- The memory access is then re-tried

Page Fault
[Figure: a reference to a swapped-out page - the (q-k)-bit page number selects a PTE whose valid bit is clear, so the MMU raises a fault]

MMU - Page Faults (continued)
- Page fault handling can be an expensive process!
- It is usual to allow the page tables themselves to be swapped out too
  - so a page fault can be generated on the page tables!

MMU - practicalities
- Page size: 8 kbyte pages => k = 13; with p = 32, p - k = 19
- So page table size = 2^19 ≈ 0.5 x 10^6 entries
- Each entry 4 bytes => 0.5 x 10^6 x 4 = 2 Mbytes!
- Page tables can take a lot of memory!
- Larger page sizes reduce page table size but can waste space
  - Access one byte => the whole page is needed!

MMU - practicalities
- Page tables are stored in main memory
  - They're too large to fit in smaller memories!
- The MMU needs to read the page table for address translation
  => Address translation can require additional memory accesses!
- What about address spaces larger than 32 bits?

MMU - Virtual Address Space Size
- Segments, etc
- Virtual address spaces > 2^32 bytes are common
  - Intel x86 - segmented address space larger than 2^32 bytes
  - PowerPC - 52-bit address space
[Figure: PowerPC address formation]

MMU - Protection
- Page table entries carry access control bits:
  - Read only
  - Read/Write
  - Execute only
- Program - "text" in Unix: execute only or read only
- Data: read/write

MMU - Alternative Page Table Styles
- Inverted page tables
  - One page table entry (PTE) per page of physical memory
  - The MMU has to search for the correct VA entry
  - PowerPC hashes the VA to a PTE address: PTE address = h(VA), where h is a hash function
- Hashing => collisions
- Hash functions in hardware
  - "hash" n bits to produce m bits, usually m < n
  - Fewer bits reduces the information content

MMU - Alternative Page Table Styles
- Hash functions in hardware: "hash" n bits to produce m bits, usually m < n
- Fewer bits reduces the information content
  - There are only 2^m distinct values now!
  - Some n-bit patterns will reduce to the same m-bit pattern
- Trivial example: 2 bits -> 1 bit with xor: h(x1 x0) = x1 xor x0

      y    h(y)
      00    0
      01    1
      10    1   <- collides with 01
      11    0   <- collides with 00

MMU - Alternative Page Table Styles
- Inverted page tables: one PTE per page of physical memory; the MMU searches for the correct VA entry
  - PowerPC hashes the VA: PTE address = h(VA)
- Hashing => collisions, so PTEs are linked together
  - A PTE contains tags (like a cache) and link bits
  - The MMU searches the linked list to find the correct entry
- Trade-off: smaller page tables / longer searches

MMU - Sharing
- Virtual addresses from different user spaces can map to the same physical pages => sharing of data
- Commonly done for frequently used programs (the Unix "sticky bit")
  - Text editors, eg vi
  - Compilers
  - Any other program used by multiple users
- Benefits: more efficient memory use, less paging, better use of cache

Address Translation - Speeding it up
- Two or more memory accesses for each datum?
  - 1-3 for the page table (single- to 3-level tables), plus 1 for the actual data
  => A system running slower than an 8088?
- Solution: Translation Look-aside Buffer (TLB or TLAB)
  - A small cache of recently-used page table entries
  - Usually fully associative
  - Can be quite small!

Address Translation - Speeding it up
- TLB sizes are small - typically a few tens to a couple of hundred entries (eg MIPS R-series, HP PA-RISC, Pentium 4 'Prescott')
- One page table entry per page of data
- Locality of reference: programs spend a lot of time in the same memory region
  => TLB hit rates tend to be very high (~98%)
  => This compensates for the cost of a miss (many memory accesses - but for only ~2% of references to memory!)

TLB - Coverage
- How much of the address space is 'covered' by the TLB?
  - ie how large is the address space for which the TLB will ensure fast translations?
  - All other accesses need to fetch PTEs from cache, main memory or (hopefully not too often!) disc
- Coverage = number of TLB entries x page size
  - eg 128-entry TLB, 8 kbyte pages: coverage = 2^7 x 2^13 = 2^20 ~ 1 Mbyte
- A critical factor in program performance
  - Working data set size > TLB coverage can be bad news!

TLB - Coverage
- Luckily, sequential access is fine!
- Example: a large (several Mbyte) array of doubles (8-byte floating point values)
  - 8 kbyte pages => 1024 doubles/page
- Sequential access, eg summing all the values:

      for( j = 0; j < n; j++ )
          sum = sum + x[j];

- First access to each page => TLB miss
  - but the TLB is now loaded with the entry for this page, so the next 1023 accesses => TLB hits
  => TLB hit rate = 1023/1024 ~ 99.9%

TLB - Avoid thrashing!
- An n x n float matrix accessed by columns
- For n >= 2048, what is the TLB hit rate?
  - Each element is on a new page!
  - TLB hit rate = 0%
- General rule: access matrices by rows if possible

Memory Hierarchy - Operation
- Search = a direct look-up (standard page table) or a search (inverted page table)
- Hit = page found in RAM