Chapter 21 Swapping: Mechanisms Chien-Chung Shen CIS/UD

Go Beyond Physical Memory
How to support many concurrently-running, large address spaces?
– the OS needs a place to stash away portions of address spaces that are not currently in great demand (where?): swap space on the hard disk drive
– how can the OS make use of a larger, slower device (the disk) to transparently provide the illusion of a large virtual address space? virtual memory (with swap space on disk)
– why support a single large address space per process? convenience and ease of use

Swap Space
Reserved space on disk for moving pages back and forth
– the OS must remember the disk address of a given swapped-out page
– How many processes can be supported? Which ones are not currently running?
– the "code" page(s) of a binary such as a.out are initially on disk and are brought into memory on demand
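As a rough sketch (not code from the slides), a page-table entry can record either where a resident page lives in memory or where a swapped-out page lives on disk; the C struct and field names below are hypothetical:

    #include <stdint.h>

    /* Hypothetical 32-bit page-table entry: when present == 1, frame_or_slot
     * holds the physical frame number (PFN); when present == 0, the same
     * field instead holds the page's block number within swap space,
     * so the OS can find the page on disk at page-fault time. */
    typedef struct {
        uint32_t present       : 1;   /* 1 = page is in physical memory    */
        uint32_t valid         : 1;   /* 1 = page is part of address space */
        uint32_t protect       : 3;   /* read/write/execute protection     */
        uint32_t frame_or_slot : 27;  /* PFN, or swap-space block number   */
    } pte_t;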

TLB Algorithm (Review)
A running process generates virtual memory references (to fetch instructions and to access data)
So far, valid implied present, because all pages of an address space were assumed to be in physical memory

Present Bit and Page Fault
When the hardware looks in the PTE, it may find that the page is not present in physical memory
– present bit == 0 (page is not in memory)
– page fault: the act of accessing a page that is not in physical memory
Upon a page fault, the OS page-fault handler runs (for both hardware-managed and software-managed TLBs)
Why not let the hardware handle page faults?
– the disk is too slow, and there are too many details to handle
Where to find the desired page?
– in the page table (the PTE holds either a PFN or a disk address)
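On the software side, the page-fault handler roughly does the following; a minimal sketch in the same pseudocode style as the hardware control flow on a later slide (FindFreePhysicalPage, EvictPage, and DiskRead are assumed helper routines, not real kernel APIs):

    PFN = FindFreePhysicalPage()     // grab a free physical frame
    if (PFN == -1)                   // no free frame available ...
        PFN = EvictPage()            // ... run the replacement policy to make one
    DiskRead(PTE.DiskAddr, PFN)      // sleep, waiting for the disk I/O to complete
    PTE.present = True               // update the page table: the page is now
    PTE.PFN = PFN                    //   in memory, in frame PFN
    RetryInstruction()               // retry the instruction that faulted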

When Memory Is Full
The OS pages out one or more pages to make room for the new page(s) it is about to page in
– which page(s) to evict is decided by the page-replacement policy
– a poor choice makes the program run at disk-like speed rather than memory-like speed (10,000 or 100,000 times slower)
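To make that gap concrete, a back-of-the-envelope calculation, assuming roughly 100 ns per DRAM access and 1–10 ms per disk access: 1 ms / 100 ns = 10,000 and 10 ms / 100 ns = 100,000, which is where the "10,000 or 100,000 times slower" figure comes from; a program whose pages keep spilling to disk can run that much slower than one that fits in memory.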

Page Fault Control Flow
[Figure: hardware control flow (the "valid and present" case) alongside the OS software control flow (run the page-fault handler)]

Page Fault Control Flow
VPN = (VirtualAddress & VPN_MASK) >> SHIFT
(Success, TlbEntry) = TLB_Lookup(VPN)
if (Success == True)    // TLB Hit
    if (CanAccess(TlbEntry.ProtectBits) == True)
        Offset = VirtualAddress & OFFSET_MASK
        PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
        Register = AccessMemory(PhysAddr)
    else
        RaiseException(PROTECTION_FAULT)
else                    // TLB Miss
    PTEAddr = PTBR + (VPN * sizeof(PTE))
    PTE = AccessMemory(PTEAddr)
    if (PTE.Valid == False)
        RaiseException(SEGMENTATION_FAULT)
    else if (CanAccess(PTE.ProtectBits) == False)
        RaiseException(PROTECTION_FAULT)
    else if (PTE.Present == True)
        // assuming hardware-managed TLB
        TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
        RetryInstruction()
    else
        RaiseException(PAGE_FAULT)

When Replacement Occurs
The OS proactively keeps a small amount of memory free, using a high watermark (HW) and a low watermark (LW) to decide when to start evicting pages from memory
When the OS notices that fewer than LW pages are available, a background thread (the swap daemon or page daemon) responsible for freeing memory runs and evicts pages until HW pages are available
The daemon can cluster or group a number of pages and write them out at once to the swap space, increasing the efficiency of the disk (by reducing seek and rotational overheads)
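A minimal sketch of what such a background daemon loop might look like, again in C-like pseudocode (LW, HW, FreePageCount, ChooseVictim, ClusterWrite, and FreePage are illustrative names, not a real kernel interface):

    // swap/page daemon: wakes up when free memory drops below the low watermark
    while (True)
        SleepUntil(FreePageCount() < LW)
        batch = {}                           // victims to write out in one go
        while (FreePageCount() + SizeOf(batch) < HW)
            batch = batch + {ChooseVictim()} // chosen by the page-replacement policy
        ClusterWrite(batch)                  // one large write to swap space:
                                             //   fewer seek/rotation overheads
        for page in batch
            FreePage(page)                   // now the frames can be reused

In a real system, clean (unmodified) victims would not need to be written back at all and could be freed immediately; the sketch omits that detail for brevity.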