
1 CHAPTER 3-3: PAGE MAPPING MEMORY MANAGEMENT

2 VIRTUAL MEMORY Key Idea Disassociate addresses referenced in a running process from addresses available in storage

3 FEATURES
- Can address a storage space larger than primary storage
- Creates the illusion that a process is placed contiguously in memory
Two Methods
- Paging: memory allocated in fixed-size blocks
- Segmentation: memory allocated in blocks of different sizes

4 TERMS
Virtual Addresses: addresses referenced by a running process
Real Addresses: addresses available in primary storage
Virtual Address Space: range of virtual addresses that a process may reference
Real Address Space: range of real addresses available on a particular computer
Implication: processes are referenced in virtual memory, but run in real memory

5 MAPPING
1. Virtual memory is contiguous
2. Physical memory need not be contiguous
3. Virtual memory can exceed physical memory
4. #3 implies that only a part of a process has to be in physical memory
5. Virtual memory is limited by address size
6. Physical memory has (what else?) physical limits
[Figure: virtual memory mapped onto physical memory]

6 ISSUES
- How are addresses mapped?
- Page fault: a virtual memory reference has no corresponding item in physical memory
- Physical memory is full, but a process needs something in secondary storage

7 NL-MAP/MMU: A FIRST LOOK
Suppose every virtual address is mapped to a real address (far beyond base/limit registers)
Problem: the page map table is as large as the process
Solution: break the process into fixed-size blocks, say 512 bytes to 8 KB, and map the blocks in hardware
Virtual memory blocks: pages
Real memory blocks: page frames

8 PAGE TABLE
Page: chunk of virtual memory
Page Frame: chunk of physical memory
The relation between virtual addresses and physical memory addresses is given by the page table
Every page begins on a multiple of 4096 and ends 4095 addresses higher
So 4K-8K really means 4096-8191, and 8K-12K means 8192-12287
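
A minimal C sketch (not from the slides) of that split, assuming 4 KB pages; the variable names are illustrative:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096u    /* 4 KB pages, as in the slides */
#define OFFSET_BITS 12       /* log2(4096)                   */

int main(void)
{
    uint32_t va = 5000;                       /* falls in the 4K-8K page */
    uint32_t page   = va >> OFFSET_BITS;      /* 5000 / 4096 = 1         */
    uint32_t offset = va & (PAGE_SIZE - 1);   /* 5000 - 4096 = 904       */

    printf("va=%u -> page %u, offset %u\n", va, page, offset);
    printf("page %u spans addresses %u..%u\n",
           page, page * PAGE_SIZE, page * PAGE_SIZE + PAGE_SIZE - 1);  /* 4096..8191 */
    return 0;
}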

9 A LITTLE HELP FROM HARDWARE
[Figure: the position and function of the MMU]
The MMU is shown as part of the CPU chip, but it could just as easily be a separate chip

10 NL-MAP (1)
NL-MAP is really a look-up into a page table
Each process has its own page table
A register in the CPU is loaded with the real address, A, of the process's page table
The page table contains one entry for each page of the process

11 NL-MAP (2)
A virtual address has two parts: (p, d)
- page number (p)
- offset within the page (d)
An entry in the page table has (at least) five parts:
- present/absent bit
- protection bits (r/w/e)
- referenced and modified bits: if a page is dirty (modified) when evicted, it must be written back to disk
- secondary storage address
- page frame address
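
A hypothetical C layout for such an entry is sketched below; the field widths are illustrative, not a real MMU's format:

/* One page table entry, roughly following the slide's five parts.
   Field widths are illustrative, not a real MMU format.           */
struct pte {
    unsigned present    : 1;   /* 0 => page not in primary storage: page fault */
    unsigned prot       : 3;   /* protection bits: read / write / execute      */
    unsigned referenced : 1;   /* set when the page is touched                 */
    unsigned modified   : 1;   /* "dirty": must be written back when evicted   */
    unsigned frame      : 20;  /* page frame address (frame number)            */
    unsigned disk_slot;        /* secondary storage address of the page        */
};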

12 NL-MAP (3)
Consequences:
1. The page table can be large
2. Mapping must be fast
[Figure legend for a page table entry]
ref: page has been referenced
mod: page has been modified
prot: what kinds of access are permitted
pres/abs: if set to 0, page fault

13 SPEED
The direct-mapped page table is kept in main memory, so two memory accesses are required to satisfy a reference:
1. Page table
2. Primary storage

14 SIZE
Suppose we have 32-bit addresses
- Some of the 32 bits select an entry in the page table (the page number)
- The rest are the offset (displacement) within the page frame
Suppose our pages are 4K (4096 bytes)
- Offsets from 0 to 4K-1 are necessary; these are displacements from the start of the page frame, the 'd' part of the real address
- Since 4K = 2^12, we reserve the low-order 12 bits for this
- That leaves 20 bits to address entries in the page table, so each process would have a page table with 2^20 entries
If the page size is smaller, we have more entries!

15 IMPLICATIONS OF THE DIRECT MAPPING MODEL
Large page tables are impractical from two perspectives:
- They require lots of memory
- Loading an entire page table at each context switch would kill performance

16 MORE HELP FROM HARDWARE
Translation lookaside buffer (TLB)
- Associative memory, searchable in parallel
- Very small number of entries, say 8 to 256

17 TLB

18 TLB CLOSE-UP

19 PRINCIPLE OF LOCALITY
A TLB with only 16 entries achieves 90% of the performance of a system with the entire page table in associative memory
Why? A page referenced in the recent past is likely to be referenced again

20 PAGING WITH TLB (1)
A small associative memory with 16 or 32 registers stores the most recently referenced page frames
Technique (sketched in code below):
1. The process references a virtual address (p, d)
2. Do a parallel search of the TLB for p
   - if found, return p', the frame number corresponding to p
   - else look up p in the page table and update the TLB with (p, p'), replacing the least recently used entry
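
A minimal C sketch of that technique, assuming a tiny software-simulated TLB with LRU replacement; real TLBs do the search in parallel in hardware, and page_table_lookup here is a stand-in stub, not a real API:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16

struct tlb_entry {
    uint32_t page;        /* virtual page number p          */
    uint32_t frame;       /* page frame number p'           */
    bool     valid;
    uint64_t last_used;   /* timestamp for LRU replacement  */
};

static struct tlb_entry tlb[TLB_ENTRIES];
static uint64_t now;

/* Stand-in for the full page table walk (stub, not a real API). */
static uint32_t page_table_lookup(uint32_t page) { return page + 100; }

static uint32_t translate(uint32_t page)
{
    now++;

    /* 1. Search the TLB for p (hardware does this in parallel).   */
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            tlb[i].last_used = now;
            return tlb[i].frame;                   /* hit: return p' */
        }
    }

    /* 2. Miss: look p up in the page table ...                    */
    uint32_t frame = page_table_lookup(page);

    /* 3. ... and replace the least recently used TLB entry.       */
    int victim = 0;
    for (int i = 1; i < TLB_ENTRIES; i++)
        if (tlb[i].last_used < tlb[victim].last_used)
            victim = i;
    tlb[victim] = (struct tlb_entry){ page, frame, true, now };

    return frame;
}

int main(void)
{
    printf("frame for page 7: %u\n", translate(7));   /* miss, then filled */
    printf("frame for page 7: %u\n", translate(7));   /* TLB hit           */
    return 0;
}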

21 PAGING WITH TLB (2)
When p is not found in the TLB, the least recently used TLB slot is filled with the entry for p from the page table

22 The Whole Picture

23 PAGE TABLES CAN BE LARGE
Suppose a 32-bit machine, 4K page size, 12-bit displacement:
- 2^20 page table entries, each (at least) 8 bytes
- 8 MB page table per process
Now suppose a 64-bit machine, 4K page size, 12-bit displacement:
- 2^52 page table entries, each (at least) 8 bytes
- (2^55 bytes) / (2^40 bytes/TB) = 2^15 TB page table per process
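
The same arithmetic, written out as a small C check (the 8-byte entry size is the slide's assumption; the function name is illustrative):

#include <stdio.h>
#include <stdint.h>

/* Bytes of page table per process with one flat (direct-mapped) table. */
static uint64_t flat_table_bytes(int addr_bits, int offset_bits, int entry_bytes)
{
    uint64_t entries = 1ULL << (addr_bits - offset_bits);
    return entries * (uint64_t)entry_bytes;
}

int main(void)
{
    /* 32-bit machine, 4K pages: 2^20 entries x 8 bytes = 8 MB      */
    printf("32-bit: %llu MB\n",
           (unsigned long long)(flat_table_bytes(32, 12, 8) >> 20));

    /* 64-bit machine, 4K pages: 2^52 entries x 8 bytes = 2^15 TB   */
    printf("64-bit: %llu TB\n",
           (unsigned long long)(flat_table_bytes(64, 12, 8) >> 40));
    return 0;
}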

24 INDIRECTION: TWO LEVELS (THERE COULD BE MORE)
Virtual address = (p, t, d)
- p: page number at the first level
- t: page number at the second level
- d: displacement into the page frame

25 THE TWO LEVEL MODEL
[Figure: the virtual address (p, t, d) is split three ways. p indexes the first-level page table (at address A) to find p', the address of a second-level table; t indexes that second-level table to find p'', the page frame address; d is added to p'' to form the real address. There is one second-level table for each entry in level 1.]

26 HOW MANY ACTUAL TABLES?
Suppose 32-bit addresses and a 4K page size:
- P: 10-bit displacement into the first-level table
- T: 10-bit displacement into the second-level table
- D: 12-bit displacement into the page frame
1. The first-level table has 2^10 entries
2. Each second-level table has 2^10 entries
Each entry in the top-level table references 4 MB of memory, because it references a second-level page table of 1024 (2^10) entries, each of which points to a 4K page
At most: 1 top-level table with 1024 entries
At most: 1024 second-level tables, each with 1024 entries pointing to 4K pages
At most: 1024 * 1024 * 4K = 2^32 bytes of process (see the sketch below)
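
A small C sketch of this structure, with second-level tables allocated only when their 4 MB region is first used; the table layout, names, and the "frame 0 means unmapped" convention are all illustrative simplifications:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define ENTRIES 1024                      /* 2^10 entries per table      */
#define PAGE_SZ 4096                      /* 4K pages                    */

/* Second-level table: one frame number per 4K page (0 = not mapped).   */
struct level2 { uint32_t frame[ENTRIES]; };

/* Top-level table: one pointer per 4 MB region; NULL = no table yet.   */
static struct level2 *top[ENTRIES];

/* Map virtual page 'vpn' to physical frame 'pfn', allocating the
   second-level table only when its 4 MB region is first touched.       */
static void map_page(uint32_t vpn, uint32_t pfn)
{
    uint32_t p = vpn >> 10, t = vpn & 0x3FF;
    if (!top[p])
        top[p] = calloc(1, sizeof *top[p]);
    top[p]->frame[t] = pfn;
}

/* Translate a 32-bit virtual address, or return -1 on a "page fault".  */
static int64_t translate(uint32_t va)
{
    uint32_t p = va >> 22, t = (va >> 12) & 0x3FF, d = va & 0xFFF;
    if (!top[p] || top[p]->frame[t] == 0)
        return -1;                                   /* page fault       */
    return (int64_t)top[p]->frame[t] * PAGE_SZ + d;
}

int main(void)
{
    map_page(0x00403004 >> 12, 75);                  /* map one page     */
    printf("%lld\n", (long long)translate(0x00403004)); /* 75*4096 + 4   */
    printf("%lld\n", (long long)translate(0x09000000)); /* -1: unmapped  */
    return 0;
}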

27 KEY IDEA
Not all of the tables are necessary
- With direct mapping, each process requires a page table with up to 2^20 entries whether they are needed or not
- With a two-level page table the savings can be substantial

28 EXAMPLE
Suppose a 12 MB process:
- bottom 4 MB for code
- next 4 MB for data
- top 4 MB for stack
- a hole between the top of the data and the bottom of the stack
The top-level page table has 2^10 slots, 2^3 bytes each, and only three of its entries are used
These point to three second-level page tables, each with 2^10 slots of 2^3 bytes
Total page table memory = 2^10 slots x 2^3 bytes/slot + 3 x 2^10 slots x 2^3 bytes/slot = 2^15 bytes = 32 KB for all page tables
Compare direct mapping: an 8 MB page table
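
A quick C check of that total, using the counts from the slide:

#include <stdio.h>

int main(void)
{
    unsigned slots = 1u << 10;              /* 2^10 entries per table     */
    unsigned entry = 1u << 3;               /* 8-byte entries             */
    unsigned total = slots * entry          /* one top-level table        */
                   + 3 * slots * entry;     /* three second-level tables  */
    printf("%u bytes = %u KB\n", total, total >> 10);   /* 32768 = 32 KB  */
    return 0;
}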

29 MULTILEVEL PAGE TABLES Each arrow on the right points to a 4K page. The low 12 bits of the virtual address are a displacement into the page

30 SAMPLE PAGE REFERENCE
The MMU receives this virtual address: 0x00403004 (hex)
In binary, split 10 | 10 | 12: 0000000001 | 0000000011 | 000000000100, so P = 1, T = 3, D = 4
P = 1: entry 1 (counting from 0) in the first-level page table; find p', the address of the 2nd-level page table
T = 3: entry 3 (counting from 0) in the 2nd-level page table whose address is p'; find p'', the address of the page frame
D = 4: byte offset 4 within the page frame
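
The same decoding done in C, assuming the 10/10/12 split used above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t va = 0x00403004;
    uint32_t p = (va >> 22) & 0x3FF;   /* top 10 bits  -> 1 */
    uint32_t t = (va >> 12) & 0x3FF;   /* next 10 bits -> 3 */
    uint32_t d =  va        & 0xFFF;   /* low 12 bits  -> 4 */
    printf("P=%u T=%u D=%u\n", p, t, d);
    return 0;
}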

31 WHERE IN VIRTUAL MEMORY?
1. P indexes the top-level page table. P = 1 corresponds to the 2nd 4M of virtual memory: 4M to 8M-1
2. T indexes the 2nd-level page table. T = 3 corresponds to the 4th 4K within its 4M chunk: 3 * 4K to 4 * 4K - 1, i.e. 12K to 16K-1, or 12,288 to 16,383. But since P = 1, we are in the 2nd 4M chunk: 12,288 + 4M to 16,383 + 4M, or absolute addresses 4,206,592 to 4,210,687
3. The entry found in the 2nd-level page table contains the page frame address for the virtual page that begins at 0x00403000
4. To this we add d = 4 to get virtual address 0x00403004, which is absolute address 4,206,592 + 4 = 4,206,596 within the virtual address space
5. If the present/absent bit is 0: page fault
Notes:
a) The virtual address space has (2^32 bytes) / (2^12 bytes/page) = 2^20 pages
b) But we are using only 12M / (4K/page) = 3K pages
c) Only 4 page tables are necessary to handle all of the address references

32 MORE LEVELS ARE POSSIBLE
If we refer to the 1st-level page table as the page directory, then the scheme we have been describing is Intel's 80386 (1985)
The Pentium Pro (1995) added another level, the Page Directory Pointer Table, with 4 entries, 512-entry page tables, and 4K page frames: 2^2 x 2^9 x 2^9 x 2^12 = 4 GB (as before, but with more flexibility)
x86-64 uses 4 levels of 512-entry page tables: 2^9 x 2^9 x 2^9 x 2^9 x 2^12 = 2^48 = 256 TB

33 GETTING OUT OF HAND? INVERTED PAGE TABLES
The page table contains one entry per page frame of real memory
Suppose we want to address 1 GB of real memory using 4K page frames
The inverted page table requires 2^18 entries, because 2^18 x 2^12 = 1 GB
Seems big, but this is for all processes
When process n refers to virtual page p, p is no longer an index into the table: the entire 256K-entry table must be searched for page frame (n, p)

34 TLB AND HASH TABLES
On a TLB miss, the entire inverted table has to be searched
Hash solution (sketched below):
1. Search the TLB; if found, retrieve the page frame
2. If not found, hash the page number to find an entry in the hash table; all pages hashing to the same value are chained as (page, page frame) tuples
3. Enter the entry in the TLB and retrieve the page frame
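
A minimal C sketch of the hash-chained lookup, assuming a software-managed inverted table; the structure, hash function, and names are illustrative, not from a real OS:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define BUCKETS 1024u     /* hash table size (illustrative)               */

/* One node per resident (process, virtual page) -> page frame tuple,
   chained with other pages that hash to the same bucket.                */
struct ipt_node {
    uint32_t pid, page;          /* which process, which virtual page     */
    uint32_t frame;              /* the physical page frame holding it    */
    struct ipt_node *next;
};

static struct ipt_node *bucket[BUCKETS];

static unsigned hash(uint32_t pid, uint32_t page)
{
    return (pid * 31u + page) % BUCKETS;     /* toy hash function         */
}

/* A real system consults the TLB first and refills it after a hit here. */
static int lookup(uint32_t pid, uint32_t page)
{
    for (struct ipt_node *n = bucket[hash(pid, page)]; n; n = n->next)
        if (n->pid == pid && n->page == page)
            return (int)n->frame;
    return -1;                               /* not resident: page fault  */
}

static void insert(uint32_t pid, uint32_t page, uint32_t frame)
{
    struct ipt_node *n = malloc(sizeof *n);
    *n = (struct ipt_node){ pid, page, frame, bucket[hash(pid, page)] };
    bucket[hash(pid, page)] = n;
}

int main(void)
{
    insert(7, 1027, 42);                          /* process 7, page 1027 */
    printf("%d\n", lookup(7, 1027));              /* 42                   */
    printf("%d\n", lookup(7, 9999));              /* -1 (fault)           */
    return 0;
}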

35 INVERTED PAGE TABLES
[Figure: a) direct mapping, b) inverted page table, c) inverted page table with hash chains]

