
1 OSes: 8. Mem. Mgmt. 1 Operating Systems — 8. Memory Management (Ch. 8, S&G; ch. 9 in the 6th ed.) v Objectives – describe some of the memory management schemes used by an OS so that several processes can be in memory (RAM) at once. Certificate Program in Software Development, CSE-TC and CSIM, AIT, September -- November 2003.

2 OSes: 8. Mem. Mgmt. 2 Contents 1. Background 2. Logical vs. Physical Address Spaces 3. Swapping 4. Partition Allocation 5. Paging 6. Segmentation

3 OSes: 8. Mem. Mgmt. 3 1. Background Fig. 8.1, p.241; VUW CS 305 [Figure: source -> compiler -> object -> linker (with object libraries) -> load module -> loader (with system libraries) -> executable image in memory (with dynamic libraries); the stages correspond to compile time, load time, and execution time]

4 OSes: 8. Mem. Mgmt. 4 1.1. Address Binding v compile time –the compiler knows where a process will reside in memory so generates absolute code (code that starts at a fixed address) v load time –the compiler does not know where a process will reside so generates relocatable code (its starting address is determined at load time) continued

5 OSes: 8. Mem. Mgmt. 5 v execution time –the process can move during its execution, so its starting address is determined at run time

6 OSes: 8. Mem. Mgmt. 6 1.2. Dynamic Loading v A routine (function, procedure, library, etc.) is not loaded until it is called by another program – the routine must be stored as relocatable code v Unused routines are never loaded – saves space

7 OSes: 8. Mem. Mgmt. 7 1.3. Dynamic Linking v Often used for system libraries –e.g. Dynamically Linked Libraries (DLLs) v The first call to a DLL from a program causes the linking in and loading of the libraries –i.e. linking/loading is determined at run time continued

8 OSes: 8. Mem. Mgmt. 8 v Allows libraries to be changed/moved since linking/loading information is not fixed (as much) in the calling program.

9 OSes: 8. Mem. Mgmt. 9 1.4. Overlays v Keep in memory only those pieces of code that are needed at any given time – saves space v Example: a two-pass assembler – Pass 1: 70K; Pass 2: 80K; Symbol table: 20K; Common routines: 30K continued

10 OSes: 8. Mem. Mgmt. 10 v Loading everything requires 200K v Use two overlays: –A: symbol table, common routines, pass 1 u requires 120K –B: symbol table, common routines, pass 2 u requires 130K

11 OSes: 8. Mem. Mgmt. 11 2. Logical vs. Physical Address Space v The user sees a logical view of the memory used by their code – in one piece, unmoving – sometimes called a virtual address space v This is mapped to the physical address space – code may be located in parts/partitions – some parts may be in RAM, others in backing store continued

12 OSes: 8. Mem. Mgmt. 12 v The mapping from logical to physical is done by the Memory Management Unit (MMU). v Hiding the mapping is one of the main aims of memory management schemes.

13 OSes: 8. Mem. Mgmt. 13 Example: Dynamic Relocation Fig. 8.3, p.246 [Figure: the CPU issues logical address 346; the MMU adds the relocation register (14000) to give physical address 14346, which goes to memory]
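The relocation-register scheme above can be sketched in a few lines of Python; the register value and addresses are the ones from Fig. 8.3, while the function name is mine:

```python
# Dynamic relocation: the MMU adds the relocation register to every
# logical address the CPU generates (values from Fig. 8.3).
RELOCATION_REGISTER = 14000

def to_physical(logical):
    """Map a logical address to a physical address."""
    return logical + RELOCATION_REGISTER

print(to_physical(346))  # logical 346 -> physical 14346
```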

14 OSes: 8. Mem. Mgmt. 14 3. Swapping v If there is not enough memory (RAM) for all the ready processes then some of them may be swapped out to backing store. v They will be swapped back in when the OS can find enough memory for them. continued

15 OSes: 8. Mem. Mgmt. 15 Fig. 8.4, p.247 [Figure: OS at the bottom of memory, user space above; process P1 is swapped out to backing store while P2 is swapped in] continued

16 OSes: 8. Mem. Mgmt. 16 v With compile time / load time address binding, the process must be swapped back to its old location –no such need for processes with execution time address binding v Must be careful not to swap out a process that is waiting for I/O.

17 OSes: 8. Mem. Mgmt. 17 The Ready Queue v The ready queue consists of: –processes in memory that are ready to run –processes swapped out to backing store that are ready to run u they will need to be swapped back in if they are chosen by the CPU scheduler

18 OSes: 8. Mem. Mgmt. 18 Swap Times v The speed of swapping affects the design of the CPU scheduler –the amount of execution time it gives to a process should be much greater than the swap time to bring it into memory

19 OSes: 8. Mem. Mgmt. 19 Swap Time Example v Assume a transfer rate of 1MB/sec. v A transfer of a 100K process will take: – 100K / 1000K per sec = 0.1 sec = 100 ms v Total swap time: = transfer out + transfer in + (2 * latency) = 100 + 100 + (2 * 8) = 216 ms
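The same arithmetic as a small sketch; the rate and latency are the slide's assumptions, and the helper name is mine:

```python
# Total swap time = transfer out + transfer in + latency on each transfer.
def swap_time_ms(size_kb, rate_kb_per_sec, latency_ms):
    transfer_ms = size_kb / rate_kb_per_sec * 1000
    return 2 * transfer_ms + 2 * latency_ms

# 100K process, 1 MB/sec (~1000K/sec), 8 ms latency per transfer
print(swap_time_ms(100, 1000, 8))  # 216.0 ms
```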

20 OSes: 8. Mem. Mgmt. 20 4. Partition Allocation v Divide memory into a fixed no. of partitions, and allocate a process to each. v Partitions can be different fixed sizes – e.g. 16K, 32K, 64K, 128K continued

21 OSes: 8. Mem. Mgmt. 21 v Processes are allocated to the smallest available partition that is big enough. v Internal fragmentation will occur: the unused space inside an allocated partition is wasted.

22 OSes: 8. Mem. Mgmt. 22 4.1. Variable-size Partitions Fig. 8.7, p.252 [Figure: OS occupies 0--400K; user space runs from 400K to 2560K] Job Queue (process / memory / time): P1 600K 10; P2 1000K 5; P3 300K 20; P4 700K 8; P5 500K 15

23 OSes: 8. Mem. Mgmt. 23 Memory Allocation Fig. 8.8, p.253 [Figure: OS at 0--400K; P1 at 400K--1000K, P2 at 1000K--2000K, P3 at 2000K--2300K; then P2 ends, leaving a hole at 1000K--2000K] continued

24 OSes: 8. Mem. Mgmt. 24 [Figure: P4 (700K) is allocated at 1000K--1700K in P2's old hole; then P1 ends, and P5 is allocated in its place] continued

25 OSes: 8. Mem. Mgmt. 25 [Figure: P5 (500K) is allocated at 400K--900K; the leftover holes at 900K--1000K, 1700K--2000K, and 2300K--2560K show external fragmentation developing]

26 OSes: 8. Mem. Mgmt. 26 4.2. Dynamic Storage Allocation v Where should a process of size N be stored when there is a selection of partitions/holes to choose from? v First fit – allocate the first hole that is big enough – fastest choice to carry out continued

27 OSes: 8. Mem. Mgmt. 27 v Best fit –allocate the smallest hole that is big enough –leaves smallest leftover hole v Worst fit –allocate the largest hole –leaves largest leftover hole
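A minimal sketch of the three strategies over a list of hole sizes; the hole sizes and function names are illustrative, not from the slides:

```python
# Each strategy returns the index of the chosen hole, or None if no hole fits.
def first_fit(holes, n):
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None   # smallest hole that fits

def worst_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None   # largest hole

holes = [100, 500, 200, 300, 600]  # hypothetical hole sizes, in K
print(first_fit(holes, 212))  # 1 (the 500K hole comes first)
print(best_fit(holes, 212))   # 3 (300K is the smallest that fits)
print(worst_fit(holes, 212))  # 4 (600K is the largest)
```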

28 OSes: 8. Mem. Mgmt. 28 4.3. Compaction Fig. 8.10, p.255 [Figure: before compaction, P5 (400K--900K), P4 (1000K--1700K), and P3 (2000K--2300K) are separated by holes; after compaction, P4 is moved down to 900K--1600K and P3 to 1600K--1900K, leaving one large hole at 1900K--2560K]

29 OSes: 8. Mem. Mgmt. 29 Different Strategies Fig. 8.11, p.256 [Figure: original allocation OS 0--300K, P1 300K--500K, P2 500K--600K, P3 1000K--1200K, P4 1500K--1900K, with holes between them] Version 1: move P3 (to 600K--800K) and P4 (to 800K--1200K) down next to P2 -- 600K of data moved.

30 OSes: 8. Mem. Mgmt. 30 Or: Version 2: move only P4 down into the hole below P3 (to 600K--1000K) -- 400K moved.

31 OSes: 8. Mem. Mgmt. 31 Or: Version 3: move only P3 up to the top of memory (1900K--2100K), next to P4 -- 200K moved. Each version leaves a single 900K hole.

32 OSes: 8. Mem. Mgmt. 32 5. Paging v Divide up the logical address space of a process into fixed-size pages. v These pages are mapped to same-sized frames in physical memory – the frames may be located anywhere in memory

33 OSes: 8. Mem. Mgmt. 33 5.1. The Basic Method v Each logical address has two parts: a page number (p) and a page offset (d) v A page table contains the mapping from a page number to the base address of its corresponding frame. v Each process has its own page table – stored in its PCB

34 OSes: 8. Mem. Mgmt. 34 Paging Hardware Fig. 8.12, p.258 [Figure: the CPU issues logical address (p, d); the page table maps p to frame f; the physical address (f, d) goes to physical memory]

35 OSes: 8. Mem. Mgmt. 35 5.2. Size of a Page v The size of a page is typically a power of 2: – 512 (2^9) to 8192 (2^13) bytes v This makes it easy to split a machine address into page number and offset parts. continued

36 OSes: 8. Mem. Mgmt. 36 v For example, assume: – the address space is 2^m bytes large – a page can be 2^n bytes in size (n < m) v The logical address format becomes: page number p (m-n bits) | page offset d (n bits)
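Because the page size is a power of two, the split is just a shift and a mask. A sketch with illustrative values m = 16 and n = 10 (these particular sizes are not from the slides):

```python
M, N = 16, 10  # address bits, offset bits (illustrative values)

def split(addr):
    assert 0 <= addr < (1 << M)      # the address fits in m bits
    page = addr >> N                 # top m - n bits: page number p
    offset = addr & ((1 << N) - 1)   # low n bits: page offset d
    return page, offset

print(split(0x1234))  # (4, 564), since 0x1234 = 4*1024 + 564
```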

37 OSes: 8. Mem. Mgmt. 37 5.3. Example p.258 v Address space is 32 bytes (2^5) v Page size: 4 bytes (2^2) v Therefore, there can be 8 pages (2^3) v Logical address format: page number p (3 bits) | page offset d (2 bits)

38 OSes: 8. Mem. Mgmt. 38 Fig. 8.14, p.260 [Figure: logical memory holds bytes a--p at addresses 0--15; the page table maps page 0 -> frame 5, page 1 -> frame 6, page 2 -> frame 1, page 3 -> frame 2; in physical memory, i j k l sit at 4, m n o p at 8, a b c d at 20, e f g h at 24]

39 OSes: 8. Mem. Mgmt. 39 Using the Page Table v Logical Address -> Physical Address: – 0 -> (5*4) + 0 = 20 – 3 -> (5*4) + 3 = 23 – 5 -> (6*4) + 1 = 25 – 14 -> (2*4) + 2 = 10
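The 32-byte example worked in code; the page table is the one from Fig. 8.14, and the function name is mine:

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page -> frame (Fig. 8.14)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

for la in (0, 3, 5, 14):
    print(la, "->", translate(la))  # 20, 23, 25, 10
```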

40 OSes: 8. Mem. Mgmt. 40 5.4. Features of Paging v No external fragmentation –any free frame can be used by a process v Internal fragmentation can occur –small pages or large pages? v There is a clear separation between logical memory (the user’s view) and physical memory (the OS/hardware view).

41 OSes: 8. Mem. Mgmt. 41 5.5. Performance Issues v Every access must go through a page table – small page table -> keep it in registers – large page table -> keep the table in memory u a memory access then requires indexing into the page table plus the access itself v Translation Look-aside Buffer (TLB) – sometimes called an "associative cache"

42 OSes: 8. Mem. Mgmt. 42 5.6. Paging with a TLB Fig. 8.16, p.264 [Figure: the CPU issues (p, d); on a TLB hit, the TLB supplies frame f directly; on a TLB miss, the page table is consulted and the (p, f) pair is added to the TLB; either way the physical address (f, d) goes to physical memory]

43 OSes: 8. Mem. Mgmt. 43 Performance v Assume: – memory access takes 100 nsec – TLB access takes 20 nsec – a hit costs TLB + memory (120 nsec); a miss costs TLB + page table + memory access (220 nsec) v An 80% hit rate: – effective access time is = (0.8 * 120) + (0.2 * 220) = 140 nsec – 40% slowdown in the memory access time continued

44 OSes: 8. Mem. Mgmt. 44 v A 98% hit rate: – effective access time is = (0.98 * 120) + (0.02 * 220) = 122 nsec – 22% slowdown in the memory access time
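The two calculations can be checked with a small helper; the timings are the slide's assumptions, and the function name is mine:

```python
def eat(hit_rate, mem_ns=100, tlb_ns=20):
    hit = tlb_ns + mem_ns              # TLB + memory access
    miss = tlb_ns + mem_ns + mem_ns    # TLB + page table + memory access
    return hit_rate * hit + (1 - hit_rate) * miss

print(eat(0.80))  # 140.0 nsec
print(eat(0.98))  # ~122 nsec
```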

45 OSes: 8. Mem. Mgmt. 45 5.7. Multilevel Paging v In modern systems, the logical address space for a process is very large: – 2^32 or 2^64 bytes – virtual memory allows it to be bigger than physical memory (explained later) – the page table becomes too large

46 OSes: 8. Mem. Mgmt. 46 Example v Assume: – a 32 bit logical address space – page size is 4K bytes (2^12) v Logical address format: page number p (20 bits) | page offset d (12 bits) continued

47 OSes: 8. Mem. Mgmt. 47 v The page table must store 2^20 entries (~1 million), each of size 32 bits (4 bytes) – 4 MB page table – too big v Solution: use a two-level paging scheme to make the page tables smaller.

48 OSes: 8. Mem. Mgmt. 48 Two-level Paging Scheme v In essence: "page the page table" – divide the page table into two levels v Logical address format: page number (p1 | p2) | page offset (d) – p1 = index into the outer page table – p2 = index into the page obtained from the outer page table

49 OSes: 8. Mem. Mgmt. 49 Diagram Fig. 8.18, p.266 [Figure: the outer page table points to second-level page tables, whose entries point to memory pages (e.g. pages 0, 1, 100, 500, 708, 900, 929)]

50 OSes: 8. Mem. Mgmt. 50 Address Translation Fig. 8.19, p.267 [Figure: logical address (p1, p2, d); p1 indexes the outer page table, p2 indexes the page table it selects, and d is the offset into the desired page]

51 OSes: 8. Mem. Mgmt. 51 Two-level Page Table Sizes v Assume the logical address format: p1 (10 bits) | p2 (10 bits) | d (12 bits) v The outer page table (and each second-level page table) will contain 2^10 entries (~1000), each of size 32 bits (4 bytes) – table size = 4K
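Splitting a 32-bit logical address into p1 (10 bits), p2 (10 bits), and d (12 bits), matching the format above (the function name is mine):

```python
def split2(addr):
    d  = addr & 0xFFF           # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: index into a page table
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: index into the outer page table
    return p1, p2, d

print(split2(0xFFFFFFFF))  # (1023, 1023, 4095)
```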

52 OSes: 8. Mem. Mgmt. 52 Three-level Paging Scheme v For a 64-bit logical address space, three levels may be required to make the page table sizes manageable. v Possible logical address format: p1 (32 bits) | p2 (10 bits) | p3 (10 bits) | page offset d (12 bits) continued

53 OSes: 8. Mem. Mgmt. 53 v But now the second outer page table (the new p1) will have to store 2^32 entries – go to a four-level paging scheme!

54 OSes: 8. Mem. Mgmt. 54 Three-level Paging Slowdown v Assume: – there is a TLB cache, with access time 20 nsec – memory access takes 100 nsec – a miss costs TLB + 3 page-table levels + memory access (420 nsec) v A 98% hit rate: – effective access time is = (0.98 * 120) + (0.02 * 420) = 126 nsec – 26% slowdown in the memory access time

55 OSes: 8. Mem. Mgmt. 55 5.8. Inverted Page Table v A page table maps each page of a process to a physical frame. v If virtual memory is used, many tables may refer to the same frames u virtual memory is explained in the next chapter v Each table may have millions of entries. continued

56 OSes: 8. Mem. Mgmt. 56 v An inverted page table has one entry for each frame, which says which PID (process ID) and page are using it (currently) – reduces the amount of physical memory required to store page -> frame mappings v A logical address is represented by: <process-id, page-number, offset>

57 OSes: 8. Mem. Mgmt. 57 Inverted Page Table Diagram Fig. 8.20, p.270 [Figure: the CPU issues (pid, p, d); the inverted page table is searched for the entry matching (pid, p); its index i is the frame number, so the physical address is (i, d)]

58 OSes: 8. Mem. Mgmt. 58 Drawbacks v Slow linear search time over the inverted page table –use hashing –use TLBs for recent accesses v Still need (something like) ordinary page tables to record which pages are currently swapped out to backing store.
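A sketch of the hashed lookup suggested above — one entry per frame, keyed by (pid, page). A Python dict stands in for the hash table, and all the values are made up for illustration:

```python
FRAME_SIZE = 4096
# (pid, page) -> frame number; one entry per allocated frame
inverted = {("P1", 0): 7, ("P1", 1): 2, ("P2", 0): 5}

def translate(pid, logical):
    page, offset = divmod(logical, FRAME_SIZE)
    frame = inverted[(pid, page)]   # hash lookup instead of a linear search
    return frame * FRAME_SIZE + offset

print(translate("P1", 4100))  # page 1, offset 4 -> frame 2 -> 8196
```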

59 OSes: 8. Mem. Mgmt. 59 5.9. Shared Pages v Ordinary page tables allow frames to be shared –useful for reusing reentrant code (i.e. code that does not modify itself) v Example –three users of an editor (150K, split into three pages) and their data (50K each, one page each)

60 OSes: 8. Mem. Mgmt. 60 Editor Usage Fig. 8.21, p.271 [Figure: all three page tables map ed 1, ed 2, ed 3 to shared frames 3, 4, 6; P1's data 1 is in frame 1, P2's data 2 in frame 7, P3's data 3 in frame 2]

61 OSes: 8. Mem. Mgmt. 61 Memory Savings v Total physical memory usage (with sharing): = 150 + (3 * 50) = 300K v Total physical memory usage (without sharing): = 3 * (150 + 50) = 600K

62 OSes: 8. Mem. Mgmt. 62 Problems v Sharing relies on being able to map several pages (logical memory) to a single frame (physical memory). – not possible with an inverted page table, which allows only one page to be associated with each frame

63 OSes: 8. Mem. Mgmt. 63 6. Segmentation v A user's view of memory: Fig. 8.22, p.272 [Figure: a program seen as segments -- main program, subroutine, sqrt(), stack, symbol table] continued

64 OSes: 8. Mem. Mgmt. 64 v A logical address consists of: <segment-number s, offset d> v A compiler can create separate segments for the distinct parts of a program: – e.g. global variables, call stack, code for each function

65 OSes: 8. Mem. Mgmt. 65 Segmentation Hardware Fig. 8.23, p.274 [Figure: the CPU issues (s, d); s indexes the segment table, giving (limit, base); if d < limit the physical address is base + d, otherwise a trap: addressing error]

66 OSes: 8. Mem. Mgmt. 66 Example Fig. 8.24, p.275 [Figure: logical address space holds segments S0 (main program), S1 (subroutine), S2 (sqrt()), S3 (stack), S4 (symbol table)] Segment table (limit, base): S0 (1000, 1400); S1 (400, 6300); S2 (400, 4300); S3 (1100, 3200); S4 (1000, 4700). In physical memory: S0 at 1400--2400, S3 at 3200--4300, S2 at 4300--4700, S4 at 4700--5700, S1 at 6300--6700
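The Fig. 8.24 segment table in code, including the limit check the hardware performs before adding the base (the function name is mine):

```python
segment_table = {  # s -> (limit, base), from Fig. 8.24
    0: (1000, 1400), 1: (400, 6300), 2: (400, 4300),
    3: (1100, 3200), 4: (1000, 4700),
}

def translate(s, d):
    limit, base = segment_table[s]
    if d >= limit:                       # offset past the segment's end
        raise MemoryError("trap: addressing error")
    return base + d

print(translate(2, 53))  # 4300 + 53 = 4353
```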

67 OSes: 8. Mem. Mgmt. 67 Advantages v Segments can be used to store distinct parts of a program, so it is easier to protect them –e.g. make function code read-only

68 OSes: 8. Mem. Mgmt. 68 Sharing of Segments Fig. 8.25, p.277 [Figure: P1's segment table: S0 editor (limit 25286, base 43062), S1 data 1 (limit 4425, base 68348); P2's segment table: S0 editor (limit 25286, base 43062), S1 data 2 (limit 8850, base 90003); in physical memory the editor occupies 43062--68348 (shared by P1 and P2), data 1 occupies 68348--72773, and data 2 occupies 90003--98853]

