Operating Systems COMP 4850/CISG 5550 Page Tables TLBs Inverted Page Tables Dr. James Money.



Paging

Page Tables In the simplest cases, mapping of virtual addresses happens as we have described. The virtual address is split into lower- and higher-order bits, and the higher-order bits are grouped into a page number. Splits might use 3 or 5 bits instead of 4.

Page Tables The virtual page number is used as an index into the page table. From the entry in the page table, the page frame number is found. The page frame number is combined with the low-order offset bits to form the physical address.
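The lookup described above can be sketched in a few lines. This is a minimal illustrative model, not real MMU hardware; the 16-entry table and its frame numbers are made up for the example, and the 12-bit offset matches the 4KB pages used later in the lecture.

```python
# Sketch of a one-level page table lookup. Assumes 16-bit virtual
# addresses: a 4-bit virtual page number and a 12-bit offset (4 KB pages).

PAGE_SIZE = 4096        # 2**12 bytes per page -> 12 offset bits
OFFSET_BITS = 12

# Hypothetical page table: index = virtual page number, value = frame number.
page_table = [2, 1, 6, 0, 4, 3, 0, 0, 0, 5, 0, 7, 0, 0, 0, 0]

def translate(vaddr):
    """Map a virtual address to a physical address via the page table."""
    vpn = vaddr >> OFFSET_BITS          # higher-order bits: page number
    offset = vaddr & (PAGE_SIZE - 1)    # lower-order bits pass through
    frame = page_table[vpn]             # page table lookup
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x2004)))  # vpn 2 maps to frame 6 -> 0x6004
```

Note that only the high-order bits change; the offset within the page is copied unmodified.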

Page Tables The main purpose of the page table is to map virtual addresses to physical addresses. Mathematically, the page table is a function that takes a virtual page number and returns a physical frame number.

Page Tables There are two major issues to address: –The page table can grow to be extremely large –The mapping of addresses must be fast

Page Tables Page tables are large because most computers use at least 32-bit addresses, and many now use 64-bit addresses. With a 4KB page size, a 32-bit address space has 1 million pages. With a 64-bit address space, there are more pages than one can imagine. Typically each process has its own page table for its own virtual address space.
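The figures above are easy to check. A quick back-of-the-envelope computation, assuming 4KB pages and one 4-byte entry per page:

```python
# Page table sizes for 4 KB pages, assuming one 4-byte entry per page.
PAGE = 4 * 1024   # 4 KB page size
ENTRY = 4         # bytes per page table entry (a common 32-bit entry)

pages_32 = 2**32 // PAGE        # pages in a 32-bit address space
table_32 = pages_32 * ENTRY     # bytes needed for the full table
pages_64 = 2**64 // PAGE        # pages in a 64-bit address space

print(pages_32)   # 1048576  -> the "1 million pages" figure
print(table_32)   # 4194304  -> 4 MB per process
print(pages_64)   # 2**52 pages
```

The 64-bit result (2^52 entries) reappears in the inverted page table discussion at the end of the lecture.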

Page Tables The second point matters because the virtual-to-physical mapping must be done on every memory reference. Often there are 1, 2, or more memory references per instruction. If an instruction takes 4 nsec, the lookup must not exceed 1 nsec.

Page Tables The simplest design is to hold the table in an array of fast hardware registers, with one entry for each virtual page. This requires no memory references during mapping. The problem is that this can be expensive and slow on a context switch, when the whole table must be reloaded.

Page Tables The other extreme is to keep the entire page table in memory, with a single register pointing to the start of the table. This makes context switches easy, but reading the page table now requires memory references.

Multilevel Page Tables Many computers use a multilevel page table to alleviate the size problem. The basic idea is a two-level tree: the top level points to leaf nodes, each of which is an array of page table entries. We partition the 32-bit virtual address into a PT1 field, a PT2 field, and an Offset field.

Multilevel Page Tables

The idea is not to keep all the leaf page tables in memory when they are not needed. A page fault occurs when the top-level entry's Present bit is clear; either the address is illegal, or more pages must be allocated to the process. The scheme can be extended to three or four levels.

Multilevel Page Tables For example, consider referencing the virtual address 0x. This corresponds to PT1=1, PT2=2, and Offset=4. The MMU uses PT1 as an index into the top-level table and PT2 as the index into the appropriate second-level table.
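A two-level walk like the one just described can be sketched as follows. This is an illustrative model assuming the common 10/10/12-bit split of a 32-bit address; the tables are plain Python dicts, and a missing entry plays the role of a cleared Present bit.

```python
# Two-level page table walk (sketch). Assumes a 10-bit PT1 field,
# a 10-bit PT2 field, and a 12-bit offset.

PT2_BITS, OFFSET_BITS = 10, 12

# Hypothetical tables: top_level[PT1] is a second-level table,
# and that table maps PT2 to a frame number.
top_level = {1: {2: 9}}   # PT1=1, PT2=2 maps to frame 9

def walk(vaddr):
    pt1 = vaddr >> (PT2_BITS + OFFSET_BITS)
    pt2 = (vaddr >> OFFSET_BITS) & ((1 << PT2_BITS) - 1)
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    leaf = top_level.get(pt1)
    if leaf is None or pt2 not in leaf:
        raise LookupError("page fault")   # Present bit clear somewhere
    return (leaf[pt2] << OFFSET_BITS) | offset

# PT1=1, PT2=2, Offset=4 corresponds to 0x00402004 under this split
print(hex(walk(0x00402004)))  # frame 9 -> 0x9004
```

Only the one second-level table actually touched needs to exist; addresses falling in any other 4MB region fault at the top level, which is exactly the memory saving the slide describes.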

Page Table Entries We consider now the structure of a single page table entry. The layout is highly machine dependent, but roughly the same from machine to machine. A common size for an entry is 32 bits.

Page Table Entries The entry contains the following fields: –Page frame number –Present/absent bit –Protection – what kind of access is permitted –Modified/Referenced – keeps track of writes and reads to the page frame –Caching disabled
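The fields above can be pictured as bit positions inside one 32-bit word. The layout below is hypothetical (real CPUs place these bits differently and define more protection bits); it only illustrates packing and unpacking the named fields.

```python
# A hypothetical 32-bit page table entry: low bits hold the flags,
# high bits hold the page frame number.

PRESENT    = 1 << 0   # Present/absent bit
WRITABLE   = 1 << 1   # Protection (just read/write in this sketch)
REFERENCED = 1 << 2   # set on reads
MODIFIED   = 1 << 3   # set on writes (the "dirty" bit)
NO_CACHE   = 1 << 4   # caching disabled
FRAME_SHIFT = 12      # frame number occupies the high-order bits

def make_pte(frame, flags):
    return (frame << FRAME_SHIFT) | flags

def frame_of(pte):
    return pte >> FRAME_SHIFT

pte = make_pte(0x1A3, PRESENT | WRITABLE)
print(hex(pte), frame_of(pte) == 0x1A3, bool(pte & PRESENT))
```

With 4KB pages the low 12 bits of a frame's physical address are always zero, which is why hardware can reuse that space for flag bits.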

Page Table Entries

Note that the disk address used to hold the page when it is not in memory is not part of the page table. This is because the OS handles that information internally in its software tables; the page table only has to hold the virtual-to-physical mappings for processes.

TLBs Most paging systems keep page tables in memory because of their large size. Register-to-register instructions incur no paging overhead, but every memory reference now needs an extra page table access. Since memory access speed is the limiting factor, making two references where one would do cuts performance in half.

TLBs The solution is based on the observation that programs make a large number of references to a small number of pages. Only a small fraction of the page table entries are heavily used; the rest are used rarely.

TLBs The solution is to equip the machine with a device that handles the mapping without going through the page table. This device is called the Translation Lookaside Buffer (TLB), or associative memory. It is kept inside the MMU and has a small number of entries (8-64).

TLBs

TLBs How does this work? The MMU first looks in the TLB, comparing all entries simultaneously. If an entry is found and the protection bits permit the access, the frame number is used without going to the page table. If there is a protection violation, a protection fault is issued.

TLBs If the entry is not in the TLB, the MMU does a normal page table lookup and replaces one of the TLB entries with the entry just fetched. If that page is referenced again soon, the frame will come from the TLB.
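The hit/miss behavior just described can be modeled in software. This is a sketch, not hardware: a real TLB compares all entries in parallel and typically uses an LRU-like replacement policy, while this model uses FIFO replacement to stay short. The capacity and page table contents are made up.

```python
from collections import OrderedDict

class TLB:
    """Tiny software model of a TLB with FIFO replacement."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = OrderedDict()   # virtual page number -> frame number

    def lookup(self, vpn, page_table):
        """Return (frame, hit). On a miss, walk the page table and refill."""
        if vpn in self.entries:
            return self.entries[vpn], True       # hit: no page table access
        frame = page_table[vpn]                  # miss: normal page lookup
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)     # evict the oldest entry
        self.entries[vpn] = frame                # install the new mapping
        return frame, False

tlb = TLB(capacity=2)
pt = {0: 5, 1: 6, 2: 7}
print(tlb.lookup(0, pt))   # (5, False) -- first reference misses
print(tlb.lookup(0, pt))   # (5, True)  -- repeat reference hits
```

Running a reference string through this model makes the locality argument concrete: repeated pages hit, and only the first touch of each page pays for a table walk.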

Software TLB Management So far the management of the TLB has been done by the MMU itself, and in the past this was always true. Many modern RISC CPUs do this management in software: on these systems, a TLB fault is issued and the OS handles the TLB miss.

Software TLB Management The OS must find the page table entry, evict one TLB entry, and load the new one. This has to be done fast, since TLB misses are far more frequent than page faults. If the TLB is reasonably large (say, 64 entries), misses are few and far between and this is not a problem.

Inverted Page Tables On 32-bit computers with a 4KB page size, a full table might require 4MB. This becomes a real problem on 64-bit address systems: we now need 2^52 entries, and if each entry is 8 bytes, the table needs 30 million gigabytes.

Inverted Page Tables We need a different solution. One solution is called an inverted page table. In this design, there is one entry per page frame of real memory rather than one entry per virtual page. In the same scenario with 256MB of RAM and 4KB pages, we need only 65,536 entries.

Inverted Page Tables Each entry keeps track of which (process, virtual page) pair belongs to the frame. The virtual-to-physical mapping is now hard to perform: when process n references virtual page p, the hardware must search the inverted page table for an entry of the form (n, p).

Inverted Page Tables The way around this, again, is to use the TLB. When the TLB holds the heavily used pages, translation happens as fast as with regular page tables. On a TLB miss, however, the search must be done in software.

Inverted Page Tables One way to speed up this search is to use a hash table with chaining. If the hash table has as many slots as there are page frames, the chains will average one entry each. Once the entry is found, it is loaded into the TLB.
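A minimal sketch of the structure, assuming a made-up machine with 8 frames. Python's dict hides the chaining, but the idea is the same: the table is keyed on the (process, virtual page) pair, and there is exactly one entry per occupied frame.

```python
# Inverted page table sketch: one entry per physical frame, searched
# through a hash table keyed on (pid, virtual page number).

NUM_FRAMES = 8

# frames[f] records which (pid, vpn) currently occupies frame f
frames = [None] * NUM_FRAMES

# Hash table for the reverse search: (pid, vpn) -> frame number.
# With as many slots as frames, chains stay short on average.
lookup = {}

def install(pid, vpn, frame):
    """Record that this frame now holds (pid, vpn)."""
    frames[frame] = (pid, vpn)
    lookup[(pid, vpn)] = frame

def translate(pid, vpn):
    """Search the inverted table; raise on a miss (page fault)."""
    frame = lookup.get((pid, vpn))
    if frame is None:
        raise LookupError("page fault")
    return frame

install(pid=1, vpn=42, frame=3)
print(translate(1, 42))   # 3
```

Note the table's size depends only on physical memory, not on the size of any process's virtual address space, which is the whole point of the design.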

Inverted Page Tables