Presentation on theme: "Virtual Memory (Review) Programs refer to virtual memory addresses  movl (%ecx),%eax  Conceptually very large array of bytes  Each byte has its own."— Presentation transcript:

1 Virtual Memory (Review) Programs refer to virtual memory addresses  movl (%ecx),%eax  Conceptually a very large array of bytes, spanning addresses 00∙∙∙∙∙∙0 through FF∙∙∙∙∙∙F  Each byte has its own address  Actually implemented with a hierarchy of different memory types  System provides an address space private to each particular "process" Allocation: the compiler and run-time system decide where in the single virtual address space each program object is stored. But why virtual and not physical memory?

2 Problem 1: How Does Everything Fit? 64-bit addresses cover 16 exabytes, but physical main memory is only a few gigabytes. And there are many processes…

3 Problem 2: Memory Management What goes where in physical main memory? [figure: processes 1…n, each with its own stack, heap, .text, and .data, all competing for space in physical memory]

4 Problem 3: How To Protect? One process must not be able to touch another's memory. Problem 4: How To Share? Two processes may want access to the same physical memory. [figure: processes i and j mapped into the same physical main memory]

5 Solution: Level Of Indirection Each process gets its own private virtual memory space, mapped onto physical memory. This solves the previous problems. [figure: processes 1…n, each with a private virtual memory mapped onto shared physical memory]

6 Address Spaces Linear address space: ordered set of contiguous non-negative integer addresses {0, 1, 2, 3, …} Virtual address space: set of N = 2^n virtual addresses {0, 1, 2, 3, …, N-1} Physical address space: set of M = 2^m physical addresses {0, 1, 2, 3, …, M-1} Clean distinction between data (bytes) and their attributes (addresses) Each object can now have multiple addresses Every byte in main memory has:  One physical address  One (or more) virtual addresses

7 A System Using Physical Addressing Used in "simple" systems with embedded microcontrollers  In devices such as cars, elevators, digital picture frames, … [figure: CPU sends a physical address (PA) directly to main memory (bytes 0 … M-1), which returns the data word]

8 A System Using Virtual Addressing Used in all modern desktops, laptops, workstations One of the great ideas in computer science [figure: CPU issues a virtual address (VA); the on-chip MMU translates it to a physical address (PA) before main memory is accessed]

9 Why Virtual Addressing? Simplifies memory management for programmers  Each process gets an identical, full, private, linear address space Isolates address spaces  One process can’t interfere with another’s memory  Because they operate in different address spaces  User process cannot access privileged information  Different sections of address spaces have different permissions

10 Why Virtual Memory? Efficient use of limited main memory (RAM)  Use RAM as a cache for the parts of a virtual address space  Some non-cached parts stored on disk  Some (unallocated) non-cached parts stored nowhere  Keep only active areas of virtual address space in memory  Transfer data back and forth as needed

11 VM as a Tool for Caching Virtual memory: array of N = 2^n contiguous bytes  Think of the array (allocated part) as being stored on disk Physical main memory (DRAM) is a cache for allocated virtual memory Blocks are called pages; page size = 2^p bytes [figure: virtual pages (VPs) 0 … 2^(n-p)-1 on disk, each unallocated, cached, or uncached; physical pages (PPs) 0 … 2^(m-p)-1 in DRAM, each empty or holding a cached VP]

12 Memory Hierarchy: Core 2 Duo (not drawn to scale; L1/L2 caches use 64 B blocks)
 L1 I-cache / D-cache: 32 KB; latency 3 cycles; throughput 16 B/cycle
 L2 unified cache: ~4 MB; latency 14 cycles; throughput 8 B/cycle; miss penalty ~30x
 Main memory: ~4 GB; latency 100 cycles; throughput 2 B/cycle
 Disk: ~500 GB; latency millions of cycles; throughput 1 B/30 cycles; miss penalty ~10,000x

13 DRAM Cache Organization DRAM cache organization driven by the enormous miss penalty  DRAM is about 10x slower than SRAM  Disk is about 10,000x slower than DRAM (for the first byte; faster for successive bytes) Consequences  Large page (block) size: typically 4-8 KB, sometimes 4 MB  Fully associative  Any VP can be placed in any PP  Requires a "large" mapping function (different from CPU caches)  Highly sophisticated, expensive replacement algorithms  Too complicated and open-ended to be implemented in hardware  Write-back rather than write-through

14 Address Translation: Page Tables A page table is an array of page table entries (PTEs) that maps virtual pages to physical pages. Here: 8 VPs  Per-process kernel data structure in DRAM [figure: a memory-resident page table (PTE 0 … PTE 7); valid entries point to cached pages VP 1, VP 2, VP 7, VP 4 in DRAM (PP 0 … PP 3); invalid entries are null or hold the disk address of an uncached virtual page]

15 Address Translation With a Page Table The virtual address splits into a virtual page number (VPN) and virtual page offset (VPO); the physical address splits into a physical page number (PPN) and physical page offset (PPO). The page table base register (PTBR) holds the page table address for the process; the VPN indexes a PTE, and the PTE's PPN is concatenated with the offset (PPO = VPO) to form the physical address. Valid bit = 0: page not in memory (page fault)
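The lookup this slide describes can be sketched in a few lines of C. This is a toy model under assumed parameters (64-byte pages, as in the simple memory system example later in the deck, and an 8-entry single-level table as on the previous slide; -1 stands in for a page fault), not real MMU hardware:

```c
#include <stdint.h>

#define PAGE_BITS 6          /* assumed: 64-byte pages */
#define NUM_PAGES 8          /* assumed: 8 virtual pages */

typedef struct { int valid; uint32_t ppn; } pte_t;

/* Split the VA into VPN and VPO, index the page table, and
   concatenate PPN with the offset; -1 signals a page fault. */
long translate(const pte_t table[], uint32_t va) {
    uint32_t vpn = va >> PAGE_BITS;
    uint32_t vpo = va & ((1u << PAGE_BITS) - 1);
    if (vpn >= NUM_PAGES || !table[vpn].valid)
        return -1;                     /* valid bit 0: page fault */
    return ((long)table[vpn].ppn << PAGE_BITS) | vpo;
}

/* Demo table: VP 0 -> PP 3, VP 2 -> PP 5, everything else invalid. */
long demo_translate(uint32_t va) {
    pte_t table[NUM_PAGES] = { {1, 3}, {0, 0}, {1, 5} };
    return translate(table, va);
}
```

With this table, address 0x05 (VPN 0, offset 5) maps to 0xC5, while any address on VP 1 faults.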

16 Page Hit Page hit: reference to a VM word that is in physical memory [figure: the virtual address selects a PTE whose valid bit is 1, which points to the cached page in DRAM]

17 Page Miss Page miss: reference to a VM word that is not in physical memory [figure: the virtual address selects a PTE whose valid bit is 0, which holds the disk address of the uncached page]

18 Handling Page Fault Page miss causes page fault (an exception) [figure: same state as the page miss; the faulting reference selects an invalid PTE pointing to a page on disk]

19 Handling Page Fault Page miss causes page fault (an exception) Page fault handler selects a victim to be evicted (here VP 4) [figure: VP 4, resident in DRAM, is chosen as the victim]

20 Handling Page Fault Page miss causes page fault (an exception) Page fault handler selects a victim to be evicted (here VP 4) [figure: the missing page replaces VP 4 in DRAM (here VP 3); the page table is updated so VP 3's PTE is valid and VP 4's is not]

21 Handling Page Fault Page miss causes page fault (an exception) Page fault handler selects a victim to be evicted (here VP 4) Offending instruction is restarted: page hit! [figure: VP 3 is now resident in DRAM, so the restarted reference hits]

22 Why does it work? Locality Virtual memory works because of locality At any point in time, programs tend to access a set of active virtual pages called the working set  Programs with better temporal locality will have smaller working sets If (working set size < main memory size)  Good performance for one process after compulsory misses If ( SUM(working set sizes) > main memory size )  Thrashing: Performance meltdown where pages are swapped (copied) in and out continuously

23 VM as a Tool for Memory Management Key idea: each process has its own virtual address space  It can view memory as a simple linear array  Mapping function scatters addresses through physical memory  Well-chosen mappings simplify memory allocation and management [figure: virtual address spaces of processes 1 and 2 (each addresses 0 … N-1, pages VP 1, VP 2, …) translated onto physical pages PP 2, PP 6, PP 8, … of DRAM (addresses 0 … M-1)]

24 VM as a Tool for Memory Management Memory allocation  Each virtual page can be mapped to any physical page  A virtual page can be stored in different physical pages at different times Sharing code and data among processes  Map virtual pages in both processes to the same physical page (here: PP 6, e.g., read-only library code) [figure: same mapping diagram as the previous slide, with one virtual page from each process sharing PP 6]

25 Simplifying Linking and Loading Linking  Each program has a similar virtual address space  Code, stack, and shared libraries always start at the same addresses Loading  execve() allocates virtual pages for the .text and .data sections and creates PTEs marked as invalid  The .text and .data sections are copied, page by page, on demand by the virtual memory system [figure: Linux process layout, from 0x08048000 upward: read-only segment (.init, .text, .rodata) and read/write segment (.data, .bss) loaded from the executable file; run-time heap (created by malloc) growing up to brk; memory-mapped region for shared libraries at 0x40000000; user stack (created at runtime) growing down from %esp; memory above 0xc0000000 is kernel virtual memory, invisible to user code]

26 VM as a Tool for Memory Protection Extend PTEs with permission bits (SUP, READ, WRITE) Page fault handler checks these before remapping  If violated, send the process SIGSEGV (segmentation fault) [table: per-process PTEs mapping VP 0 - VP 2 to physical pages (PP 2, PP 4, PP 6, PP 9, PP 11) with SUP/READ/WRITE bits; both processes map a page to PP 6, and supervisor-only pages have SUP set]
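The check the fault handler performs can be sketched as a small predicate. The bit names mirror the slide's SUP/READ/WRITE columns; the numeric encoding is an assumption for illustration:

```c
#include <stdbool.h>

/* Assumed permission-bit encoding, mirroring the slide's columns. */
enum { PTE_READ = 1, PTE_WRITE = 2, PTE_SUP = 4 };

/* true = the access violates the PTE's permissions (send SIGSEGV). */
bool access_violates(int pte_bits, bool is_write, bool user_mode) {
    if (user_mode && (pte_bits & PTE_SUP))
        return true;                   /* supervisor-only page */
    if (is_write)
        return !(pte_bits & PTE_WRITE);
    return !(pte_bits & PTE_READ);
}
```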

27 Address Translation: Page Hit 1) Processor sends virtual address (VA) to MMU 2-3) MMU fetches PTE from the page table in memory (via the PTE address, PTEA) 4) MMU sends physical address (PA) to cache/memory 5) Cache/memory sends data word to processor

28 Address Translation: Page Fault 1) Processor sends virtual address to MMU 2-3) MMU fetches PTE from the page table in memory 4) Valid bit is zero, so MMU triggers a page fault exception 5) Handler identifies a victim page (and, if dirty, pages it out to disk) 6) Handler pages in the new page and updates the PTE in memory 7) Handler returns to the original process, restarting the faulting instruction

29 Speeding up Translation with a TLB Page table entries (PTEs) are cached in L1 like any other memory word  PTEs may be evicted by other data references  PTE hit still requires a 1-cycle delay Solution: Translation Lookaside Buffer (TLB)  Small hardware cache in MMU  Maps virtual page numbers to physical page numbers  Contains complete page table entries for small number of pages

30 TLB Hit A TLB hit eliminates a memory access [figure: 1) CPU sends VA to MMU; 2) MMU sends the VPN to the TLB; 3) TLB returns the PTE; 4) MMU sends the PA to cache/memory; 5) data word returns to the CPU]

31 TLB Miss A TLB miss incurs an additional memory access (the PTE) [figure: 1) CPU sends VA to MMU; 2) MMU sends the VPN to the TLB, which misses; 3) MMU fetches the PTE from memory via its address (PTEA); 4) the PTE is returned and installed in the TLB; 5) MMU sends the PA to cache/memory; 6) data word returns to the CPU] Fortunately, TLB misses are rare (why?)

32 From Virtual Address to Memory Location The Translation Lookaside Buffer (TLB) is a special cache just for the page table. Usually fully associative. [figure: CPU sends the virtual address to the TLB; on a TLB miss, the page table in main memory is consulted; the resulting physical address goes to the cache, which falls back to main memory on a cache miss]

33 Translation Lookaside Buffer Virtual to Physical translations are cached in a TLB.

34 What Happens on a Context Switch? The page table is per process, and so is the TLB. Two options: flush the TLB on every switch, or tag TLB entries with a process identifier.

35 Review of Abbreviations Components of the virtual address (VA)  TLBI: TLB index  TLBT: TLB tag  VPO: virtual page offset  VPN: virtual page number Components of the physical address (PA)  PPO: physical page offset (same as VPO)  PPN: physical page number  CO: byte offset within cache line  CI: cache index  CT: cache tag

36 Simple Memory System Example Addressing  14-bit virtual addresses  12-bit physical addresses  Page size = 64 bytes, so the low 6 bits are the page offset Virtual address: bits 13-6 are the virtual page number (VPN), bits 5-0 the virtual page offset (VPO) Physical address: bits 11-6 are the physical page number (PPN), bits 5-0 the physical page offset (PPO)
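These field boundaries can be captured directly in C. The helpers below hard-code this toy system's parameters (6 offset bits from the 64-byte pages, and, anticipating the TLB slide, 2 index bits for its 4 sets):

```c
/* 14-bit VA, 64-byte pages => 6 offset bits; 4 TLB sets => 2 index bits. */
#define VPO_BITS  6
#define TLBI_BITS 2

unsigned vpn_of(unsigned va)   { return va >> VPO_BITS; }             /* bits 13..6 */
unsigned vpo_of(unsigned va)   { return va & ((1u << VPO_BITS) - 1); }/* bits 5..0 */
unsigned tlbi_of(unsigned vpn) { return vpn & ((1u << TLBI_BITS) - 1); }
unsigned tlbt_of(unsigned vpn) { return vpn >> TLBI_BITS; }
unsigned pa_of(unsigned ppn, unsigned vpo) { return (ppn << VPO_BITS) | vpo; }
```

Running these on the deck's example address 0x03D4 reproduces the numbers in the worked solution: VPN 0x0F, VPO 0x14, TLBI 0x3, TLBT 0x03, and physical address 0x354 once the PPN (0x0D) is known.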

37 Simple Memory System Page Table Only the first 16 entries (out of 256) are shown:
VPN  PPN  Valid    VPN  PPN  Valid
00   28   1        08   13   1
01   –    0        09   17   1
02   33   1        0A   09   1
03   02   1        0B   –    0
04   –    0        0C   –    0
05   16   1        0D   2D   1
06   –    0        0E   11   1
07   –    0        0F   0D   1

38 Simple Memory System TLB 16 entries, 4-way set associative; VPN bits 1-0 form the TLB index (TLBI) and VPN bits 7-2 the TLB tag (TLBT):
Set 0: (tag 03, –, invalid) (tag 09, PPN 0D, valid) (tag 00, –, invalid) (tag 07, PPN 02, valid)
Set 1: (tag 03, PPN 2D, valid) (tag 02, –, invalid) (tag 04, –, invalid) (tag 0A, –, invalid)
Set 2: (tag 02, –, invalid) (tag 08, –, invalid) (tag 06, –, invalid) (tag 03, –, invalid)
Set 3: (tag 07, –, invalid) (tag 03, PPN 0D, valid) (tag 0A, PPN 34, valid) (tag 02, –, invalid)

39 Simple Memory System Cache 16 lines, 4-byte block size, physically addressed, direct mapped; PA bits 1-0 are the block offset (CO), bits 5-2 the index (CI), bits 11-6 the tag (CT):
Idx  Tag  Valid  B0 B1 B2 B3
0    19   1      99 11 23 11
1    15   0      –  –  –  –
2    1B   1      00 02 04 08
3    36   0      –  –  –  –
4    32   1      43 6D 8F 09
5    0D   1      36 72 F0 1D
6    31   0      –  –  –  –
7    16   1      11 C2 DF 03
8    24   1      3A 00 51 89
9    2D   0      –  –  –  –
A    2D   1      93 15 DA 3B
B    0B   0      –  –  –  –
C    12   0      –  –  –  –
D    16   1      04 96 34 15
E    13   1      83 77 1B D3
F    14   0      –  –  –  –

40 Address Translation Example #1 Virtual Address: 0x03D4 = 00 1111 0101 0100 (binary) VPN ___ TLBI ___ TLBT ___ TLB Hit? __ Page Fault? __ PPN ___ Physical Address: CO ___ CI ___ CT ___ Hit? __ Byte ___

41 Address Translation Example #2 Virtual Address: 0x0020 = 00 0000 0010 0000 (binary) VPN ___ TLBI ___ TLBT ___ TLB Hit? __ Page Fault? __ PPN ___ Physical Address: CO ___ CI ___ CT ___ Hit? __ Byte ___

42 Address Translation Example #1 (solution) Virtual Address: 0x03D4 = 00 1111 0101 0100 VPN 0x0F, TLBI 0x3, TLBT 0x03, TLB Hit? Yes, Page Fault? No, PPN 0x0D Physical Address: 0x354 = 0011 0101 0100 CO 0x0, CI 0x5, CT 0x0D, Hit? Yes, Byte 0x36

43 Address Translation Example #2 (solution) Virtual Address: 0x0020 = 00 0000 0010 0000 VPN 0x00, TLBI 0x0, TLBT 0x00, TLB Hit? No, Page Fault? No, PPN 0x28 Physical Address: 0xA20 = 1010 0010 0000 CO 0x0, CI 0x8, CT 0x28, Hit? No, Byte: from memory

44 Address Translation Example #3 (solution) Virtual Address: 0x0B8F = 00 1011 1000 1111 VPN 0x2E, TLBI 0x2, TLBT 0x0B, TLB Hit? No, Page Fault? Yes, PPN: TBD (the physical address cannot be formed until the page is brought in)

45 Summary Programmer’s view of virtual address space  Each process has its own private linear address space  Cannot be corrupted by other processes System view of VAS & virtual memory  Uses memory efficiently by caching virtual memory pages  Efficient only because of locality  Simplifies memory management and programming  Simplifies protection by providing a convenient interpositioning point to check permissions

46 Allocating Virtual Pages Example: allocating VP 5 [figure: the page table before allocation; PTE 5 is null, while VP 1, VP 2, VP 7, and VP 3 are resident in DRAM]

47 Allocating Virtual Pages Example: allocating VP 5 Kernel allocates VP 5 on disk and points PTE 5 to it [figure: PTE 5 now holds the disk address of VP 5; its valid bit stays 0 until the page is brought into DRAM]

48 Page Table Size Given:  4 KB (2^12) page size  48-bit address space  4-byte PTE How big is the page table?

49 Multi-Level Page Tables Given:  4 KB (2^12) page size  48-bit address space  4-byte PTE Problem:  A flat table would need 2^48 × 2^-12 × 2^2 = 2^38 bytes = 256 GB! Common solution  Multi-level page tables  Example: 2-level page table  Level 1 table: each PTE points to a level 2 page table  Level 2 table: each PTE points to a page (paged in and out like other data)  Level 1 table stays in memory  Level 2 tables paged in and out
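The arithmetic above generalizes to one line of C: the number of virtual pages (2 raised to address bits minus page-offset bits) times the bytes per PTE.

```c
#include <stdint.h>

/* Size in bytes of a flat (single-level) page table. */
uint64_t flat_table_bytes(int addr_bits, int page_bits, int pte_bytes) {
    uint64_t num_ptes = UINT64_C(1) << (addr_bits - page_bits);
    return num_ptes * pte_bytes;   /* one PTE per virtual page */
}
```

flat_table_bytes(48, 12, 4) gives 2^38 bytes, the 256 GB quoted on the slide; for a 32-bit address space the same formula yields a manageable 4 MB.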

50 A Two-Level Page Table Hierarchy [figure: a level 1 page table whose PTE 0 and PTE 1 point to level 2 tables covering VP 0-1023 and VP 1024-2047 (2K allocated VM pages for code and data); PTEs 2-7 are null, covering a gap of 6K unallocated VM pages plus 1023 more unallocated pages; PTE 8 points to a level 2 table of 1023 null PTEs whose final entry maps VP 9215, the 1 allocated VM page for the stack]

51 Translating with a k-level Page Table The virtual address is partitioned into k VPN fields plus the VPO (bits p-1 … 0): VPN 1 indexes the level 1 page table to find the level 2 table, VPN 2 indexes that, and so on; the level k PTE supplies the PPN, which is concatenated with the VPO (= PPO) to form the physical address.

52 x86-64 Paging Origin  AMD's way of extending x86 to a 64-bit instruction set  Intel has followed with "EM64T" Requirements  48-bit virtual address: 256 terabytes (TB)  Not yet ready for full 64 bits  Nobody can buy that much DRAM yet  Mapping tables would be huge  52-bit physical address: 40 bits for the PPN  Requires 64-bit table entries  Keep traditional x86 4 KB page size, and the same size for page tables  (4096 bytes per PT) / (8 bytes per PTE) = only 512 entries per page

53 Intel Core i7 Memory System Per core (×4):  Registers, instruction fetch  L1 d-cache: 32 KB, 8-way; L1 i-cache: 32 KB, 8-way  L1 d-TLB: 64 entries, 4-way; L1 i-TLB: 128 entries, 4-way  L2 unified cache: 256 KB, 8-way; L2 unified TLB: 512 entries, 4-way  MMU (address translation) Shared by all cores (one processor package):  L3 unified cache: 8 MB, 16-way  DDR3 memory controller: 3 × 64 bit @ 10.66 GB/s, 32 GB/s total, to main memory  QuickPath interconnect: 4 links @ 25.6 GB/s, 102.4 GB/s total, to other cores and the I/O bridge

54 High end of the Intel "Core" brand: 731M transistors, 1366 pins. The quad-core Core i7 was announced in late 2008; a six-core edition launched in March 2010. How many caches (including TLBs) are on this chip?

55 Review of Abbreviations Components of the virtual address (VA)  TLBI: TLB index  TLBT: TLB tag  VPO: virtual page offset  VPN: virtual page number Components of the physical address (PA)  PPO: physical page offset (same as VPO)  PPN: physical page number  CO: byte offset within cache line  CI: cache index  CT: cache tag

56 Overview of Core i7 Address Translation Virtual address (VA): 36-bit VPN + 12-bit VPO. For the L1 TLB (16 sets, 4 entries/set) the VPN splits into a 32-bit TLBT and 4-bit TLBI; for the page tables (rooted at CR3) it splits into four 9-bit fields VPN1-VPN4. On a TLB hit the PTE comes from the TLB; on a TLB miss it comes from the four-level page-table walk. Physical address (PA): 40-bit PPN + 12-bit PPO. For the L1 d-cache (64 sets, 8 lines/set) the PA splits into a 40-bit CT, 6-bit CI, and 6-bit CO; an L1 hit returns the 32/64-bit result directly, an L1 miss goes to L2, L3, and main memory.

57 TLB Translation 1. Partition the VPN into TLBT and TLBI. 2. Is the PTE for this VPN cached in set TLBI? 3. Yes: check permissions, build the physical address (PPN ++ PPO). 4. No: read the PTE (and others as necessary) from the page tables in memory and build the physical address.

58 TLB Miss: Page Table Translation The four 9-bit VPN fields index four levels of tables in turn: CR3 points to the page global directory (VPN1 selects the L1 PTE), which points to the page upper directory (VPN2, L2 PTE), then the page middle directory (VPN3, L3 PTE), then the page table itself (VPN4, L4 PTE), whose 40-bit PPN joins the 12-bit VPO to form the physical address.

59 BONUS SLIDES

60 PTE Formats Level 1-3 PTE (P=1): bit 0 P, bit 1 R/W, bit 2 U/S, bit 3 WT, bit 4 CD, bit 5 A, bit 7 PS, bit 8 G, bits 51-12 page table physical base address, bit 63 XD; remaining bits unused. When P=0, the rest of the entry is available for the OS (page table location on disk). Level 4 PTE (P=1): same layout with bit 6 D (dirty) and bit 7 zero, and bits 51-12 holding the page physical base address. When P=0, available for the OS (page location on disk).
P: page table is present in memory
R/W: read-only or read+write
U/S: user or supervisor mode access
WT: write-through or write-back cache policy for this page table
CD: cache disabled or enabled
A: accessed (set by MMU on reads and writes, cleared by OS)
D: dirty (set by MMU on writes, cleared by OS)
PS: page size 4 KB (0) or 4 MB (1), for level 1 PTEs only
G: global page (don't evict from TLB on task switch)
Page table physical base address: 40 most significant bits of the physical page table address
XD: disable or enable instruction fetches from this page
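The bit positions listed above can be pulled out with shifts and masks. A sketch using only positions the slide gives (P at bit 0, R/W at bit 1, U/S at bit 2, the 40-bit base address at bits 51-12, XD at bit 63):

```c
#include <stdint.h>

int pte_present(uint64_t pte)  { return (int)(pte & 1); }         /* bit 0 */
int pte_writable(uint64_t pte) { return (int)((pte >> 1) & 1); }  /* bit 1 */
int pte_user(uint64_t pte)     { return (int)((pte >> 2) & 1); }  /* bit 2 */
int pte_xd(uint64_t pte)       { return (int)(pte >> 63); }       /* bit 63 */

/* 40-bit physical base address, bits 51..12. */
uint64_t pte_base(uint64_t pte) {
    return (pte >> 12) & ((UINT64_C(1) << 40) - 1);
}
```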

61 L1 Cache Access 1. Partition the physical address (PA) into CO, CI, and CT. 2. Use CT to determine whether the line containing the word at address PA is cached in set CI. 3. No: check L2 (then L3 and main memory). 4. Yes: extract the word at byte offset CO and return it to the processor.

62 Speeding Up L1 Access: A "Neat Trick" Observation  The bits that determine CI are identical in the virtual and physical address  So we can index into the cache while address translation is taking place  Generally we hit in the TLB, so the PPN bits (CT bits) are available quickly  "Virtually indexed, physically tagged"  The cache is carefully sized to make this possible: CI and CO fall entirely within the page offset (VPO = PPO), which translation never changes
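The sizing constraint can be stated as an inequality: the trick is safe only when every cache-index bit lies inside the page offset, i.e. CO bits + CI bits <= VPO bits. A one-line check with the Core i7 numbers (64-byte lines give CO = 6, 64 sets give CI = 6, 4 KB pages give VPO = 12):

```c
/* True when all cache set-index bits fall within the page offset,
   so virtual and physical indexing select the same set. */
int index_within_offset(int co_bits, int ci_bits, int vpo_bits) {
    return co_bits + ci_bits <= vpo_bits;
}
```

This is one reason the L1 caches stay at 32 KB (64 sets × 8 ways × 64 B): growing the set count would push index bits above the page offset and break the overlap.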

63 Linux VM "Areas" Each process's task_struct points to an mm_struct, which holds pgd (the address of the level 1 page table) and mmap, the head of a list of vm_area_struct nodes. Each vm_area_struct describes one area of the process's virtual memory (text, data, shared libraries, …):  vm_start, vm_end: the area's extent  vm_prot: read/write permissions for all pages in this area  vm_flags: shared/private status of all pages in this area  vm_next: the next area in the list

64 Linux Page Fault Handling Is the VA legal?  i.e., is it within an area defined by some vm_area_struct?  If not (#1), signal a segmentation violation Is the operation legal?  i.e., can the process read/write this area?  If not (#2), signal a protection violation Otherwise  Valid address (#3): handle the fault

65 Memory Mapping Creation of a new VM area is done via "memory mapping"  Create a new vm_area_struct and page tables for the area The area can be backed by (i.e., get its initial values from):  A regular file on disk (e.g., an executable object file)  Initial page bytes come from a section of the file  Nothing (e.g., .bss), aka an "anonymous file"  The first fault will allocate a physical page full of 0's (demand-zero)  Once the page is written to (dirtied), it is like any other page Dirty pages are swapped back and forth to a special swap file. Key point: no virtual pages are copied into physical memory until they are referenced!  Known as "demand paging"  Crucial for time and space efficiency

66 User-Level Memory Mapping void *mmap(void *start, int len, int prot, int flags, int fd, int offset) [figure: len bytes of the disk file specified by file descriptor fd, starting at byte offset offset, are mapped into process virtual memory at start (or at an address chosen by the kernel)]

67 User-Level Memory Mapping void *mmap(void *start, int len, int prot, int flags, int fd, int offset) Map len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address start  start: may be 0 for "pick an address"  prot: PROT_READ, PROT_WRITE, …  flags: MAP_PRIVATE, MAP_SHARED, … Returns a pointer to the start of the mapped area (which may not be start) Example: fast file copy  Useful for applications like Web servers that need to copy files quickly  mmap() allows file transfers without copying into user space

68 mmap() Example: Fast File Copy

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>

/*
 * a program that uses mmap to copy
 * the file input.txt to stdout
 */
int main()
{
    struct stat stat;
    int fd, size;
    char *bufp;

    /* open the file & get its size */
    fd = open("./input.txt", O_RDONLY);
    fstat(fd, &stat);
    size = stat.st_size;

    /* map the file to a new VM area */
    bufp = mmap(0, size, PROT_READ, MAP_PRIVATE, fd, 0);

    /* write the VM area to stdout */
    write(1, bufp, size);
    exit(0);
}

69 Exec() Revisited To run a new program p in the current process using exec():  Free the old areas' vm_area_struct's and page tables  Create new vm_area_struct's and page tables for the new areas  Stack, BSS, data, text, shared libs  Text and data backed by the ELF executable object file  BSS and stack initialized to zero (demand-zero)  Set PC to the entry point in .text  Linux will fault in code and data pages as needed [figure: resulting process VM, bottom to top: program text (.text) and initialized data (.data) loaded from p; uninitialized data (.bss) demand-zero; runtime heap (via malloc) up to brk; memory-mapped region for shared libraries (e.g., libc.so .text/.data); stack at %esp; above 0xc0…, kernel code/data/stack (same for each process) and process-specific data structures (page tables, task and mm structs)]

70 Fork() Revisited To create a new process using fork():  Make copies of the old process's mm_struct, vm_area_struct's, and page tables  At this point the two processes share all of their pages  How to get separate spaces without copying all the virtual pages from one space to another? The "copy-on-write" (COW) technique:  Mark PTEs of writeable areas as read-only  Flag the vm_area_struct's for these areas as private "copy-on-write"  Writes by either process to these pages will then cause page faults  The fault handler recognizes copy-on-write, makes a copy of the page, and restores write permissions Net result:  Copies are deferred until absolutely necessary (i.e., when one of the processes tries to modify a shared page)

71 Discussion How does the kernel manage stack growth? How does the kernel manage heap growth? How does the kernel manage dynamic libraries? How can multiple user processes share writable data? How can mmap be used to access file contents in arbitrary (non-sequential) order?

72 9.9: Dynamic Memory Allocation Motivation  Size of data structures may be known only at runtime Essentials  Heap: demand-zero memory immediately after the bss area, grows upward  Allocator manages the heap as a collection of variable-sized blocks Two styles of allocators:  Explicit: allocation and freeing both explicit  C (malloc and free), C++ (new and delete)  Implicit: allocation explicit, freeing implicit  Java, Lisp, ML  Garbage collection: automatically freeing unused blocks  Tradeoffs: ease of (correct) use, runtime overhead

73 Heap Management Allocators request additional heap memory from the kernel using the sbrk() function, which moves the "brk" pointer at the top of the heap: void *sbrk(intptr_t amt_more) (returns the old break on success, (void *) -1 on error) [figure: process layout from program text (.text), initialized data (.data), and uninitialized data (.bss) through the run-time heap (via malloc, topped by the brk pointer) up to the stack at %esp; memory above is protected from user code]

74 Heap Management Classic CS problem  Handle arbitrary request sequence  Respond immediately to allocation requests  Meet alignment requirements  Avoid modifying allocated blocks  Maximize throughput and memory utilization  Avoid fragmentation Specific issues to consider  How are free blocks tracked?  Which free block to pick for next allocation?  What to do with remainder of free block when part allocated?  How to coalesce freed blocks?

75 Heap Management Block format  Allocators typically maintain a header (and optional padding) alongside each payload  header: block size plus an allocated bit (a = 1: allocated block; a = 0: free block)  size: block size  payload: application data (allocated blocks only)  Stepping beyond block bounds can really mess up the allocator
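A common way to realize this block format (a sketch, not any particular allocator) packs the size and the allocated bit into one word, exploiting the fact that block sizes are multiples of the alignment (8 is assumed here), so the low 3 bits of the size are always zero:

```c
#include <stddef.h>

typedef size_t header_t;

/* size is assumed to be a multiple of 8, leaving bit 0 free for a. */
header_t pack(size_t size, int allocated) { return size | (size_t)(allocated & 1); }
size_t   get_size(header_t h)  { return h & ~(size_t)7; }
int      get_alloc(header_t h) { return (int)(h & 1); }
```

pack(32, 1) stores 33; masking recovers size 32 and the allocated bit 1. This is exactly why a stray write into a header corrupts the allocator: both fields live in the same word.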

76 9.10: Garbage Collection Related to dynamic memory allocation  Garbage collection: automatically reclaiming allocated blocks that are no longer used  Need arises when blocks are not explicitly freed  Also a classic CS problem

77 9.11: Memory-Related Bugs Selected highlights  Dereferencing bad pointers  Reading uninitialized memory  Overwriting memory  Referencing nonexistent variables  Freeing blocks multiple times  Referencing freed blocks  Failing to free blocks

78 Dereferencing Bad Pointers The classic scanf bug:

scanf("%d", val);   /* should be &val */

79 Reading Uninitialized Memory Assuming that heap data is initialized to zero:

/* return y = Ax */
int *matvec(int **A, int *x)
{
    int *y = malloc(N*sizeof(int));
    int i, j;

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            y[i] += A[i][j] * x[j];   /* y[i] starts out as garbage */
    return y;
}

80 Overwriting Memory Allocating the (possibly) wrong sized object:

int **p;
p = malloc(N*sizeof(int));     /* should be sizeof(int *) */
for (i = 0; i < N; i++) {
    p[i] = malloc(M*sizeof(int));
}

81 Overwriting Memory Off-by-one error:

int **p;
p = malloc(N*sizeof(int *));
for (i = 0; i <= N; i++) {     /* <= walks one element past the end */
    p[i] = malloc(M*sizeof(int));
}

82 Overwriting Memory Not checking the max string size:

char s[8];
int i;
gets(s);   /* reads "123456789" from stdin: overflows s */

Basis for classic buffer overflow attacks  1988 Internet worm  Modern attacks on Web servers  AOL/Microsoft IM war

83 Overwriting Memory Referencing a pointer instead of the object it points to  Code below is intended to remove the first item in a binary heap of *size items, then reheapify the remaining items  Problem: the decrement binds to the pointer, so *size-- is parsed as *(size--)  Programmer intended (*size)--

int *BinheapDelete(int **binheap, int *size)
{
    int *packet;
    packet = binheap[0];
    binheap[0] = binheap[*size - 1];
    *size--;                     /* decrements size, not *size */
    Heapify(binheap, *size, 0);
    return(packet);
}
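The trap can be seen in isolation. This hypothetical demo (not the slide's heap code) uses a two-element array so the stray pointer decrement stays within bounds:

```c
int precedence_demo(void) {
    int vals[2] = {10, 20};
    int *size = &vals[1];        /* points at the 20 */

    *size--;                     /* parsed as *(size--): moves the POINTER
                                    back one element; vals[1] is untouched */
    int unchanged = vals[1];     /* still 20 */

    (*size)--;                   /* what was intended: decrements vals[0] */
    return unchanged * 100 + vals[0];
}
```

precedence_demo() returns 2009: the 20 shows *size-- changed no data, and the 9 shows (*size)-- actually decremented the pointed-to value.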

84 Other Pointer Pitfalls Misunderstanding pointer arithmetic  Code below is intended to scan an array of ints and return a pointer to the first occurrence of val:

int *search(int *p, int val)
{
    while (*p && *p != val)
        p += sizeof(int);   /* skips 4 ints per step; should be p++ */
    return p;
}
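The underlying rule: C pointer arithmetic already scales by the element size, so p + 1 advances one int while p + sizeof(int) skips four of them (assuming 4-byte ints). A hypothetical demo:

```c
int stride_demo(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int *p = a;
    int next    = *(p + 1);            /* element 1: the value 2 */
    int skipped = *(p + sizeof(int));  /* element 4 on a 4-byte-int machine */
    return next * 10 + skipped;        /* 25 when sizeof(int) == 4 */
}
```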

85 Referencing Nonexistent Variables Forgetting that local variables disappear when a function returns:

int *foo()
{
    int val;
    return &val;   /* val's stack slot vanishes on return */
}

86 Freeing Blocks Multiple Times Nasty!

x = malloc(N*sizeof(int));
/* do some stuff with x */
free(x);

y = malloc(M*sizeof(int));
/* do some stuff with y */
free(x);   /* double free! */

87 Referencing Freed Blocks Evil!

x = malloc(N*sizeof(int));
/* do some stuff with x */
free(x);
...
y = malloc(M*sizeof(int));
for (i = 0; i < M; i++)
    y[i] = x[i]++;   /* x was freed above */

88 Failing to Free Blocks (Memory Leaks) Slow, long-term killer!

foo() {
    int *x = malloc(N*sizeof(int));
    ...
    return;   /* x is never freed */
}

89 Failing to Free Blocks (Memory Leaks) Freeing only part of a data structure:

struct list {
    int val;
    struct list *next;
};

foo() {
    struct list *head = malloc(sizeof(struct list));
    head->val = 0;
    head->next = NULL;
    /* create, manipulate rest of the list */
    ...
    free(head);   /* frees only the head node */
    return;
}

90 Before You Sell Book Back… Consider useful content of remaining chapters Chapter 10: System level I/O  Unix file I/O  Opening and closing files  Reading and writing files  Reading file metadata  Sharing files  I/O redirection  Standard I/O

91 Chapter 11 Network programming  Client-server programming model  Networks  Global IP internet: IP addresses, domain names, DNS servers  Sockets  Web servers

92 Chapter 12 Concurrent programming  CP with processes (e.g., fork, exec, waitpid)  CP with I/O multiplexing  Ask kernel to suspend process, returning control when certain I/O events have occurred  CP with threads, shared variables, semaphores for synchronization

93 Class Wrap-up Final exam in testing center: both days of finals  Check testing center hours, days!  50 multiple choice questions  Covers all chapters (4-7 questions each from chapters 2-9)  3 hour time limit: but unlikely to use all of it  Review midterm solutions, chapter review questions  Remember:  Final exam score replaces lower midterm scores  4 low quizzes will be dropped in computing overall quiz score Assignments  Deadline for late labs is tomorrow (15 June).  Notify instructor immediately if you are still working on a lab  All submissions from here on out: send instructor an email

94 Reminder + Request From class syllabus:  Must complete all labs to receive passing grade in class  Must receive passing grade on final to pass class Please double check all scores on Blackboard  Contact TA for problems with labs, homework  Contact instructor for problems with posted exam or quiz scores

95 Parting Thought Again and again I admonish my students both in Europe and in America: “Don’t aim at success – the more you aim at it and make it a target, the more you are going to miss it. For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended side-effect of one’s personal dedication to a cause greater than oneself or as the by-product of one’s surrender to a person other than oneself. Happiness must happen, and the same holds for success: you have to let it happen by not caring about it. I want you to listen to what your conscience commands you to do and go on to carry it out to the best of your knowledge. Then you will live to see that in the long run – in the long run, I say! – success will follow you precisely because you had forgotten to think of it.” Viktor Frankl


