
CSE 451: Operating Systems Fall 2010 Module 10 Memory Management Adapted from slides by Chia-Chi Teng

Goals of memory management
Allocate scarce memory resources among competing processes, maximizing memory utilization and system throughput
Provide a convenient abstraction for programming (and for compilers, etc.)
Provide isolation between processes
–we have come to view “addressability” and “protection” as inextricably linked, even though they’re really orthogonal

Tools of memory management
Base and limit registers
Swapping
Paging (and page tables and TLBs)
Segmentation (and segment tables)
Page fault handling => Virtual memory
The policies that govern the use of these mechanisms

Today’s desktop and server systems
The basic abstraction that the OS provides for memory management is virtual memory (VM)
–VM enables programs to execute without requiring their entire address space to be resident in physical memory
a program can also execute on machines with less RAM than it “needs”
–many programs don’t need all of their code or data at once (or ever)
e.g., branches they never take, or data they never read/write
no need to allocate memory for it; the OS should adjust the amount allocated based on run-time behavior
–virtual memory isolates processes from each other
one process cannot name addresses visible to others; each process has its own isolated address space

Virtual memory requires hardware and OS support
–MMUs, TLBs, page tables, page fault handling, …
Typically accompanied by swapping, and at least limited segmentation

A trip down Memory Lane …
Why?
–Because it’s instructive
–Because embedded processors (98% or more of all processors) typically don’t have virtual memory
First, there was job-at-a-time batch programming
–programs used physical addresses directly
–the OS loads the job (perhaps using a relocating loader to “offset” branch addresses), runs it, unloads it
–what if the program wouldn’t fit into memory? manual overlays!
An embedded system may have only one program!

Swapping
–save a program’s entire state (including its memory image) to disk
–allows another program to be loaded into memory and run
–the saved program can be swapped back in and re-started right where it was
The first timesharing system, MIT’s “Compatible Time Sharing System” (CTSS), was a uni-programmed swapping system
–only one memory-resident user
–upon request completion or quantum expiration, a swap took place

Then came multiprogramming
–multiple processes/jobs in memory at once
to overlap I/O and computation
to increase CPU utilization
–memory management requirements:
protection: restrict which addresses processes can use, so they can’t stomp on each other
fast translation: memory lookups must be fast, in spite of the protection scheme
fast context switching: when switching between jobs, updating memory hardware (protection and translation) must be quick

Virtual addresses for multiprogramming
To make it easier to manage the memory of multiple processes, make processes use virtual addresses (which is not what we mean by “virtual memory” today!)
–virtual addresses are independent of the location in physical memory (RAM) where the referenced data lives
the OS determines the location in physical memory
–instructions issued by the CPU reference virtual addresses
e.g., pointers, arguments to load/store instructions, the PC …
–virtual addresses are translated by hardware into physical addresses (with some setup from the OS)

The set of virtual addresses a process can reference is its address space
–many different possible mechanisms for translating virtual addresses to physical addresses
we’ll take a historical walk through them, ending up with our current techniques
Note: We are not yet talking about paging, or virtual memory – only that the program issues addresses in a virtual address space, and these must be “adjusted” to reference memory (the physical address space)
–for now, think of the program as having a contiguous virtual address space that starts at 0, and a contiguous physical address space that starts somewhere else

Old technique #1: Fixed partitions
Physical memory is broken up into fixed partitions
–partitions may have different sizes, but the partitioning never changes
–hardware requirement: base register, limit register
physical address = virtual address + base register
base register loaded by the OS when it switches to a process
–how do we provide protection? if (physical address > base + limit) then… ?
Advantages
–Simple
Problems
–internal fragmentation: the available partition is larger than what was requested
–external fragmentation: two small partitions left, but one big job – what sizes should the partitions be??
(figure: physical memory divided into partition 0 [0–2K], partition 1 [2K–6K], partition 2 [6K–8K], and partition 3 [8K–12K])

Mechanics of fixed partitions
(figure: the virtual address is used as an offset and added to the base register – here P2’s base, 6K – to form the physical address; the offset is also compared against the limit register, 2K, and a protection fault is raised if it exceeds the limit)
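To make the base-and-limit mechanism concrete, here is a minimal C sketch of the check the hardware performs on each reference under fixed (or variable) partitions. The register variables and the translate() function are hypothetical illustrations, not part of the slides; the numbers match the P2 example above.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical relocation registers, loaded by the OS at context switch. */
static uint32_t base_reg;   /* start of the process's partition         */
static uint32_t limit_reg;  /* size of the partition (max legal offset) */

/* Translate a virtual address (an offset into the partition) to a
 * physical address, faulting if it falls outside the partition. */
uint32_t translate(uint32_t vaddr) {
    if (vaddr >= limit_reg) {
        fprintf(stderr, "protection fault: offset 0x%x >= limit 0x%x\n",
                vaddr, limit_reg);
        exit(1);                 /* real hardware would trap to the OS */
    }
    return base_reg + vaddr;     /* physical = virtual + base */
}

int main(void) {
    base_reg  = 6 * 1024;        /* P2's partition starts at 6K */
    limit_reg = 2 * 1024;        /* partition is 2K long        */
    printf("vaddr 100 -> paddr %u\n", translate(100));   /* prints 6244 */
    return 0;
}
```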

Old technique #2: Variable partitions
Obvious next step: physical memory is broken up into partitions dynamically – partitions are tailored to programs
–hardware requirements: base register, limit register
–physical address = virtual address + base register
–how do we provide protection? if (physical address > base + limit) then… ?
Advantages
–no internal fragmentation
simply allocate the partition size to be just big enough for the process (assuming we know what that is!)
Problems
–external fragmentation
as we load and unload jobs, holes are left scattered throughout physical memory
slightly different than the external fragmentation for fixed partition systems

Mechanics of variable partitions
(figure: the virtual address (offset) is compared against the limit register, P3’s size – a protection fault is raised if it exceeds the limit – and otherwise added to the base register, P3’s base, to form the physical address into physical memory, which holds partitions 0 through 4)

Dealing with fragmentation
Swap a program out
Re-load it, adjacent to another
Adjust its base register
“Lather, rinse, repeat”
Ugh
“Compaction”
(figure: partitions 0–4 scattered through physical memory before compaction, packed contiguously after)

Placement algorithm
Variable and dynamic partitions
–First fit
–Best fit
–Next fit
Buddy system
Read the textbook

Modern technique: Paging
Solve the external fragmentation problem by using fixed sized units in both physical and virtual memory
(figure: the physical address space is divided into frames 0, 1, 2, …, Y; the virtual address space is divided into pages 0, 1, 2, 3, …, X)

User’s perspective
Processes view memory as a contiguous address space from bytes 0 through N
–virtual address space (VAS)
In reality, virtual pages are scattered across physical memory frames – not contiguous as earlier
–virtual-to-physical mapping
–this mapping is invisible to the program
Protection is provided because a program cannot reference memory outside of its VAS
–the virtual address 0xDEADBEEF maps to different physical addresses for different processes
Note: Assume for now that all pages of the address space are resident in memory – no “page faults”

Address translation
Translating virtual addresses
–a virtual address has two parts: virtual page number & offset
–the virtual page number (VPN) is an index into a page table
–the page table entry contains a page frame number (PFN)
–the physical address is PFN::offset
Page tables
–managed by the OS
–map virtual page number (VPN) to page frame number (PFN)
the VPN is simply an index into the page table
–one page table entry (PTE) per page in the virtual address space
i.e., one PTE per VPN

Mechanics of address translation
(figure: the virtual address is split into a virtual page # and an offset; the virtual page # indexes the page table to obtain a page frame #; the page frame # concatenated with the unchanged offset forms the physical address into physical memory, which consists of page frames 0, 1, 2, 3, …, Y)
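A minimal C sketch of the translation just described, assuming 32-bit virtual addresses, 4KB pages, and a simple one-level (linear) page table. The names page_table and translate() are illustrative, and valid/protection bits are omitted here (they appear on the PTE slide below).

```c
#include <stdint.h>

#define PAGE_SHIFT 12                     /* 4KB pages -> 12-bit offset */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_VPNS   (1u << 20)             /* 32-bit VA, 20-bit VPN */

/* Hypothetical linear page table: one page frame number per VPN. */
static uint32_t page_table[NUM_VPNS];

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;        /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);    /* offset within page  */
    uint32_t pfn    = page_table[vpn];            /* page frame number   */
    return (pfn << PAGE_SHIFT) | offset;          /* PFN::offset         */
}
```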

Example of address translation
Assume 32 bit addresses
–assume page size is 4KB (4096 bytes, or 2^12 bytes)
–VPN is 20 bits long (2^20 VPNs), offset is 12 bits long
Let’s translate virtual address 9000 decimal
–VPN is 2, and offset is 808 decimal (9000 = 2*4096 + 808)
–assume page table entry 2 contains the value 4
page frame number is 4
VPN 2 maps to PFN 4
–physical address = PF base + offset = 4*4096 + 808 = 16384 + 808 = 17192
(figure: virtual page 2 of the 0–4GB virtual address space maps through the page table to page frame 4 of physical memory; the offset 808 is carried over unchanged)

Example of address translation
How about in binary? Still assume 32 bit addresses
–assume page size is 4KB (4096 bytes, or 2^12 bytes)
–VPN is 20 bits long (2^20 VPNs), offset is 12 bits long
Let’s translate virtual address 0x13325328
–VPN is 0x13325, and offset is 0x328
–assume page table entry 0x13325 contains the value 0x03004
page frame number is 0x03004
VPN 0x13325 maps to PFN 0x03004
–physical address = PFN::offset = 0x03004328
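As a usage check, the hypothetical translate()/page_table sketch above reproduces both worked examples:

```c
#include <assert.h>

void check_examples(void) {
    page_table[2]       = 4;        /* decimal example: VPN 2 -> PFN 4       */
    page_table[0x13325] = 0x03004;  /* hex example: VPN 0x13325 -> 0x03004   */

    assert(translate(9000)       == 17192);       /* 4*4096 + 808            */
    assert(translate(0x13325328) == 0x03004328);  /* PFN::offset             */
}
```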

Exercise
Consider a simple paging system with 2^20 bytes of physical memory, 2^28 bytes of virtual address space, and a page size of 1K bytes.
–How many entries are in the page table?
2^28 / 1K = 2^18 entries (or 256M / 1K = 256K)
–How many bits of the virtual address are used to specify the page #?
28 – 10 = 18
–How many bits of the physical address are used to specify the frame #?
20 – 10 = 10

Exercise
Consider the following page table (columns: virtual page #, valid bit, physical frame #; in particular, virtual page 1 is valid and maps to physical frame 4).
–Assume the page size is 2048 bytes; all numbers are decimal and zero-based.
What is the physical address for the virtual address 2096?
VPN = 2096 / 2048 = 1
Offset = 2096 – 2048 = 48
PF = 4
Physical address = 4 * 2048 + 48 = 8192 + 48 = 8240

Page Table Entries (PTEs)
PTEs control the mapping
–the valid bit says whether or not the PTE can be used
says whether or not a virtual address is valid
it is checked each time a virtual address is used
–the referenced bit says whether the page has been accessed
it is set when a page has been read or written
–the modified bit says whether or not the page is dirty
it is set when a write to the page has occurred
–the protection bits control which operations are allowed
read, write, execute
–the page frame number determines the physical page
physical page start address = PFN
(PTE layout: page frame number | prot | M | R | V)
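As an illustration only (not a real hardware format; field widths and ordering vary by architecture), a 32-bit PTE with the bits listed above could be written as a C bitfield struct:

```c
#include <stdint.h>

/* Illustrative PTE layout matching the fields on this slide:
 * valid (V), referenced (R), modified (M), protection bits, and PFN. */
typedef struct {
    uint32_t valid      : 1;   /* V: translation may be used           */
    uint32_t referenced : 1;   /* R: set on any read or write          */
    uint32_t modified   : 1;   /* M: set on a write (page is dirty)    */
    uint32_t prot       : 3;   /* read / write / execute permissions   */
    uint32_t pfn        : 20;  /* page frame number (4KB frames)       */
    uint32_t unused     : 6;
} pte_t;
```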

Paging advantages
Easy to allocate physical memory
–physical memory is allocated from a free list of frames
to allocate a frame, just remove it from the free list
–external fragmentation is not a problem!
managing variable-sized allocations is a huge pain in the neck
–“buddy system”
Leads naturally to virtual memory
–entire program need not be memory resident
–take page faults using the “valid” bit
–but paging was originally introduced to deal with external fragmentation, not to allow programs to be partially resident

Paging disadvantages
Can still have internal fragmentation
–a process may not use memory in exact multiples of pages
Memory reference overhead
–2 references per address lookup (page table, then memory)
–solution: use a hardware cache to absorb page table lookups
translation lookaside buffer (TLB) – next class
Memory required to hold page tables can be large
–need one PTE per page in the virtual address space
–32 bit AS with 4KB pages = 2^20 PTEs = 1,048,576 PTEs
–4 bytes/PTE = 4MB per page table
OSs typically have separate page tables per process
25 processes = 100MB of page tables
–solution: page the page tables (!!!) (ow, my brain hurts)

Multi-level page table
(figure: x86 4KB page architecture)
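A hedged sketch of the two-level walk implied by the x86 4KB-page figure: the top 10 bits of the virtual address index a page directory, the next 10 bits index a page table, and the low 12 bits are the page offset. The pde_t type and translate2() are simplified stand-ins, not the real x86 entry formats.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified two-level lookup: 10-bit directory index, 10-bit table
 * index, 12-bit offset.  Each entry here is a plain pointer or PFN
 * rather than the real x86 bit layout. */
typedef struct {
    uint32_t *page_table;   /* NULL if this 4MB region is unmapped */
} pde_t;

static pde_t page_directory[1024];

int translate2(uint32_t vaddr, uint32_t *paddr) {
    uint32_t dir_idx = (vaddr >> 22) & 0x3FF;  /* bits 31..22 */
    uint32_t tab_idx = (vaddr >> 12) & 0x3FF;  /* bits 21..12 */
    uint32_t offset  = vaddr & 0xFFF;          /* bits 11..0  */

    if (page_directory[dir_idx].page_table == NULL)
        return -1;                             /* page fault */
    uint32_t pfn = page_directory[dir_idx].page_table[tab_idx];
    *paddr = (pfn << 12) | offset;
    return 0;
}
```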

4MB page size extension
(figure: x86 page table structure with the 4MB page size extension)

Segmentation (We will be back to paging soon!)
Paging
–mitigates various memory allocation complexities (e.g., fragmentation)
–view an address space as a linear array of bytes
–divide it into pages of equal size (e.g., 4KB)
–use a page table to map virtual pages to physical page frames
page (logical) => page frame (physical)
Segmentation
–partition an address space into logical units
stack, code, data, heap, subroutines, …
–a virtual address is <segment #, offset>

What’s the point?
More “logical”
–absent segmentation, a linker takes a bunch of independent modules that call each other and linearizes them
–they are really independent; segmentation treats them as such
Facilitates sharing and reuse
–a segment is a natural unit of sharing – a subroutine or function
A natural extension of variable-sized partitions
–variable-sized partition = 1 segment/process
–segmentation = many segments/process

Hardware support
Segment table
–multiple base/limit pairs, one per segment
–segments named by segment #, used as an index into the table
a virtual address is <segment #, offset>
–the offset of the virtual address is added to the base address of the segment to yield the physical address

Segment lookups
(figure: the segment # of the virtual address indexes the segment table to fetch a base/limit pair; the offset is compared against the limit – a protection fault is raised if it is too large – and otherwise added to the base to form the physical address into one of segments 0–4 in physical memory)
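A minimal C sketch of the segment lookup in the figure, treating a virtual address as <segment #, offset>; the segment_table array and seg_translate() are hypothetical names.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint32_t base;    /* physical start of the segment */
    uint32_t limit;   /* length of the segment         */
} seg_entry_t;

static seg_entry_t segment_table[32];   /* indexed by segment # */

uint32_t seg_translate(uint32_t segnum, uint32_t offset) {
    seg_entry_t *seg = &segment_table[segnum];
    if (offset >= seg->limit) {
        fprintf(stderr, "protection fault in segment %u\n", segnum);
        exit(1);                        /* real hardware traps to the OS */
    }
    return seg->base + offset;          /* physical = segment base + offset */
}
```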

Pros and cons
Yes, it’s “logical” and it facilitates sharing and reuse
But it has all the horror of a variable partition system
–except that linking is simpler, and the “chunks” that must be allocated are smaller than a “typical” linear address space
What to do?

Combining segmentation and paging
Can combine these techniques
–the x86 architecture supports both segments and paging
Use segments to manage logical units
–segments vary in size, but are typically large (multiple pages)
Use pages to partition segments into fixed-size chunks
–each segment has its own page table
there is a page table per segment, rather than per user address space
–memory allocation becomes easy once again
no contiguous allocation, no external fragmentation
(virtual address layout: segment # | page # | offset within page, where the page # and offset within page together form the offset within the segment)

Combining segmentation and paging
(figure: the segment # of the virtual address indexes the segment table to obtain a page table pointer and limit; the page # indexes that segment’s page table to obtain a page frame #, which is combined with the offset within the page to form the physical address; the page # and offset together are the offset within the segment)
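Putting the two mechanisms together, a hedged sketch of the combined lookup: the segment # selects a per-segment page table (and limit), then the page # and offset within the page are translated as in pure paging. All names and sizes here are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

typedef struct {
    uint32_t *page_table;   /* one page table per segment           */
    uint32_t  limit;        /* segment length, for the bounds check */
} seg_desc_t;

static seg_desc_t seg_table[32];

int segpage_translate(uint32_t segnum, uint32_t seg_offset, uint32_t *paddr) {
    seg_desc_t *seg = &seg_table[segnum];
    if (seg_offset >= seg->limit)
        return -1;                                 /* protection fault      */
    uint32_t page   = seg_offset >> PAGE_SHIFT;    /* page # within segment */
    uint32_t offset = seg_offset & PAGE_MASK;      /* offset within page    */
    uint32_t pfn    = seg->page_table[page];       /* page frame #          */
    *paddr = (pfn << PAGE_SHIFT) | offset;
    return 0;
}
```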

Linux:
–1 kernel code segment, 1 kernel data segment
–1 user code segment, 1 user data segment
–N task state segments (stores registers on context switch)
–1 “local descriptor table” segment (not really used)
–all of these segments are paged
Note: this is a very limited/boring use of segments! WHY?

Administrivia
Midterm
–Blackboard online test, open book
–Available on BB Thursday Feb 25, approx 10AM; due 11:59PM Monday Mar 1
–Covering Chapters 1-7 & 9
Quiz 3
Project 2 – email me your project ideas NOW
–Part 1: proposal, due this Friday
–Part 2: start compiling
–Part 3: finish building
–Part 4: run, demo, write-up