
1 Chapter 8, Main Memory

2 8.1 Background. When a machine language program executes, it may cause memory address reads or writes. From the point of view of memory, it is of no interest what the program is doing. All that is of concern is how the program, operating system, and machine manage access to the memory.

3 Address binding. The O/S manages an input queue in secondary storage of jobs that have been submitted but not yet scheduled. The long term scheduler takes jobs from the input queue, triggers memory allocation, and puts jobs into physical memory. PCB's representing the jobs go into the scheduling system's ready queue.

4 The term memory address binding refers to the system for determining how memory references in programs are related to the actual physical memory addresses where the program resides. In short, this aspect of system operation stretches from the contents of high level language programs down to the hardware the system is running on.

5 1. In high level language programs, memory addresses are symbolic. Variable names make no reference to an address space, but the values they contain occupy physical memory. 2. When a high level language program is compiled, the compiler typically generates relative addresses. This means that the numbering of the lines of machine code starts at 0, and the operands of instructions which access program memory do so by line number, as an offset from a base address of 0.

6 3. An operating system includes a loader/linker. This is part of the long term scheduler functionality. When the program is placed in memory, assuming (as is likely) that its base load address is not 0, the relative addresses it contains don't agree with the physical addresses it occupies. A simple approach to solving this problem is to have the loader/linker convert the relative addresses of a program to absolute addresses at load time. Absolute addresses are the actual physical addresses where the program resides.

7 Note the underlying assumptions of this scenario: 1. Programs can be loaded into arbitrary memory locations. 2. Once loaded, the locations of programs in memory don't change.

8 There are several different approaches to binding memory access in programs to actual locations. 1. Binding can be done at compile time. If it's known in advance where in memory a program will be loaded, the compiler can generate absolute code.

9 2. Binding can be done at load time. This was the simple approach described earlier. The compiler generates relocatable code. The loader converts the relative addresses to actual addresses at the time the program is placed into memory.

10 3. Binding can be done at execution time. This is the most flexible approach. Relocatable code (containing relative addresses) is actually loaded. At run time, the system converts each memory reference to a real address. Implementing such a system removes the restriction that a program is always in the same address space. This kind of system supports advanced memory management schemes like paging and virtual memory, which are the advanced topics of the memory chapters. In simple terms, you see that this kind of system supports medium term scheduling, where a job can be offloaded and reloaded without needing either to reload it to the same address or to go through the address binding process again.

11 The following diagram shows the various steps involved in getting a user written piece of high level code into a system and running.

12 [Diagram: the steps from high level source code through compilation, linking, and loading to a program running in memory]

13 Logical vs. physical address space. The address generated by a program running on the CPU is a logical address. The address that actually gets manipulated in the memory management unit of the CPU, the one that ends up in the memory address register, is a physical address. Under compile time or load time binding, the logical and physical addresses are the same.

14 Under execution time binding, the logical and physical addresses differ. Logical addresses can be called virtual addresses; the book uses the terms interchangeably. Overall, the physical memory belonging to a program can be called its physical address space. The complete set of possible memory references of a program can be called its logical or virtual address space.

15 For efficiency, memory management in real systems is supported in hardware. The mapping from logical to physical is done by the memory management unit (MMU). In the simplest of schemes, the MMU contains a relocation register. This register contains the base address, or offset into main memory, where a program is loaded. Converting from a relative address to an absolute address means adding the relative address to the contents of the relocation register, as sketched below.
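
In code, the relocation scheme amounts to a single addition. This is a minimal sketch, not a real MMU interface; the type and function names are illustrative assumptions only.

    #include <stdint.h>

    /* Minimal sketch of relocation-register translation. The hardware
       adds the base in the relocation register to every relative address. */
    typedef struct {
        uint32_t relocation;   /* base address where the program was loaded */
    } mmu_t;

    uint32_t translate(const mmu_t *mmu, uint32_t relative) {
        return mmu->relocation + relative;   /* absolute = base + relative */
    }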

16 When a program is running, every time an instruction makes reference to a memory address, the relative address is passed to the MMU. The MMU is transparent; it does everything necessary to convert the address. For a simple read, for example, the MMU returns the value found at the converted address. For a simple write, the MMU takes the given value and writes it to the converted address. All other memory access instructions are handled similarly. An illustrative diagram of MMU functionality follows.

17 Memory management unit functionality with relative addresses

18 Although the simple diagram doesn't show it, address references can still be out of range. However, the point is that under relative addressing, the program lives in its own virtual world. The program deals only in logical addresses while the system handles mapping them to physical addresses.

19 The previous discussion illustrated addressing in a very basic way. What follows are some historical enhancements, some of which led to the characteristics of complete, modern memory management schemes. Dynamic loading is a precursor to paging, but it isn't efficient enough for a modern environment. It is reminiscent of medium term scheduling.

20 Dynamic loading – One of the assumptions so far has been that a complete program had to be loaded into memory in order to run – Consider the following scenario – 1. Separate routines are stored on the disk in relocatable format – 2. When a routine is called, first it's necessary to check if it's already been loaded. If so, control is transferred to it – 3. If not, the loader immediately loads it and updates its address tables

21 Dynamic linking and shared libraries. To understand dynamic linking, consider what static linking would mean. If every user program that used a system library had to have a copy of the system code bound into it, that would be static linking. This is clearly inefficient. Why make multiple copies of shared code in loaded program images?

22 Under dynamic linking, a user program contains a special stub where system code is called. At run time, when the stub is encountered, a system call checks to see whether the needed code has already been loaded by another program. If not, the code is loaded and execution continues. If the code was already loaded, then execution continues at the address where the system had loaded it.

23 Dynamic linking of system libraries supports both transparent library updates and the use of different library versions. If user code is dynamically linked to system code and the system code changes, there is no need to recompile the user code, because the user code doesn't contain a copy of the system code.

24 If different versions of libraries are needed, this is straightforward. Old user code will use whatever version was in effect when it was written. New versions need new names, and new user code can be written to use the new version. However, if it is desirable for old user code to use the new library version, the old user code will have to be changed so that the stub refers to the new version rather than the old.

25 Obviously, the ability to do this is all supported by system functionality. The fundamental functionality, from the point of view of memory management, is shared access to common memory. In general, the memory space belonging to one process is disjoint from the memory space belonging to another. However, the system may include access to a shared system library in the virtual memory space of more than one user process.

26 Overlays. This is another technique that is very old and has little modern use. It is possible that it would still have some application in environments where physical memory is extremely limited.

27 Suppose a program ran sequentially and could be broken into two halves, where no loop or branch reached from the second half back to the first. Suppose also that the system provided a facility so that a running program could load an executable image into its own memory space. This is reminiscent of forking, where the fork() is followed by an exec().

28 If those requirements were met, and memory was large enough to hold half of the program but not all of it, you could write the first half so that it concludes by loading the second half. This is not simple to do, it requires system support, it certainly won't solve all of your problems, and it would be prone to mistakes. However, something like this may be necessary if memory is tiny and the system doesn't support advanced techniques like paging and virtual memory.

29 8.2 Swapping. Keep this distinct from switching, which refers to switching loaded processes on and off of the CPU. Swapping is similar to what a medium term scheduler does. Elements of swapping existed in early versions of Windows, and swapping continues to exist in Unix environments.

30 Execution images for >1 job may be in memory. If the long term scheduler picks a job from the input queue and there isn't enough memory for it, swap out the image of one that had been loaded but is currently inactive. Medium term scheduling does something like this, but on the grounds that the multi-programming level is too high.

31 Swapping is implemented because memory space is limited. Note that neither swapping nor medium term scheduling is suitable for interactive type processes. Swapping is slow because it writes to a swap space in secondary storage. Medium term scheduling and swapping are useful as a protection against limited resources. However, transferring back and forth from the disk is definitely not a time-effective strategy for supporting multi-programming on a modern system.

32 8.3 Contiguous Memory Allocation. Along with the other assumptions made so far, such as the fact that all of a program has to be loaded into memory, another assumption is made: in simple systems, the whole program is loaded, in order, from beginning to end, in one block of physical memory.

33 Referring back to earlier chapters, the interrupt vector table is assigned a fixed memory location. O/S code is assigned a fixed location. User processes are allocated contiguous blocks in the remaining free memory. Valid memory address references for relocatable code are determined by a base address and a limit value.

34 The base address corresponds to relative address 0. The limit tells the amount of memory allocated to the program. In other words, the limit corresponds to the largest valid relative address. The following diagram illustrates the MMU in more detail under these assumptions. The limit register contains the maximum relative address value; the relocation register contains the base address allocated to the program. Keep in mind that when context switching, these registers are among those that the dispatcher sets.
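
Extending the earlier relocation sketch with the limit check gives something like the following. Again, this is only a sketch with illustrative names; a real MMU performs the comparison and addition in hardware and raises a trap to the O/S on an out-of-range address.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: MMU with limit and relocation registers, as in the diagram.
       The dispatcher would load these two fields on every context switch. */
    typedef struct {
        uint32_t limit;        /* amount of memory allocated to the program */
        uint32_t relocation;   /* base physical address of the program */
    } mmu_t;

    uint32_t translate(const mmu_t *mmu, uint32_t relative) {
        if (relative >= mmu->limit) {
            fprintf(stderr, "trap: addressing error at %u\n", relative);
            exit(EXIT_FAILURE);        /* stand-in for the O/S trap handler */
        }
        return mmu->relocation + relative;
    }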

35 Memory management unit functionality with relative addresses and contiguous memory allocation with limit and relocation registers

36 Memory allocations. A simple scheme for allocating memory is to give processes fixed size partitions. A slightly more efficient scheme would vary the partition size according to the program size. The O/S keeps a table or list of free and allocated memory. Part of scheduling becomes determining whether there is memory enough to load a job.

37 Under contiguous allocation, that means finding out whether there is a "hole" (window of free memory) large enough for the job. If there is a large enough hole, in principle, that makes things "easy" (stay tuned). If there isn't a large enough hole you have two choices: A. Let the process currently being scheduled wait until there is. B. Let the scheduler set that job aside and search for jobs in the input queue that are small enough to fit into available holes.

38 The dynamic storage allocation problem. This is a classic problem of memory management. It is the problem that results when there is more than one hole of free memory large enough to allow a process to be loaded. The question is how to choose among them.

39 Historically, three algorithms have been considered. 1. First fit: Put a process into the first hole found that's big enough for it. This is fast and allocates memory efficiently. 2. Best fit: Look for the hole closest in size to what's needed. This is not as fast, and it's not clearly better in allocation.

40 3. Worst fit: This essentially means, load the job into the largest available hole. In practice it performs as well as its name suggests, but see the next bullet. External fragmentation describes the situation when memory has been allocated to processes leaving lots of unusable small holes of wasted space. Even though it doesn't work out in practice, the idea behind worst fit was to leave usable size holes. A sketch of first fit, with notes on the other two, follows.
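
The following is a minimal sketch of first-fit selection over the O/S's free list. The node layout and names are illustrative assumptions, not any particular system's structures.

    #include <stddef.h>

    /* A free-memory "hole" in the O/S's list of free and allocated memory. */
    typedef struct hole {
        size_t base;        /* start address of the free window */
        size_t size;        /* size of the free window in bytes */
        struct hole *next;
    } hole_t;

    /* First fit: return the first hole big enough, or NULL if none.
       Best fit would scan the whole list for the smallest adequate hole;
       worst fit would scan for the largest. */
    hole_t *first_fit(hole_t *free_list, size_t request) {
        for (hole_t *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)
                return h;
        return NULL;    /* caller waits, or schedules a smaller job instead */
    }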

41 Empirical studies have shown that for an amount of allocated memory measured as N, an amount of memory approximately equal to 0.5N will be lost due to fragmentation. This is known as the 50% rule. In other words, since N + 0.5N = 1.5N and 0.5N / 1.5N = 1/3, about 1/3 of memory is wasted.

42 In reality, memory is typically allocated in fixed size blocks rather than exact byte counts corresponding to process size. The overhead of keeping track of arbitrary amounts of memory, measuring in the scores of bits, is not practical. A block may consist of 1KB or some other measure of similar magnitude or larger.

43 Under this scheme, a process is allocated enough blocks to contain the whole program. External fragmentation still results, but the smallest hole will be one block. Internal fragmentation also results; this refers to the wasted memory in the last block allocated to a process. Internal fragmentation on average is equal to 1/2 of the block size.

44 Picking a block size is a classic case of balancing extremes. If block size is large enough, each process will only need one block. This degenerates into fixed partitions for processes, with large waste due to internal fragmentation. If block size is small enough, you approach allocating byte by byte; internal fragmentation is insignificant, but external fragments can become small enough to be unusable.

45 Compacting memory in order to reduce fragmentation. If programs use absolute memory addresses, they simply can't be relocated. Memory couldn't be compacted without recompiling the programs. This is out of the question. If programs use relative memory addresses, they are relocatable. Even during run time, they can be moved to new memory locations, squeezing the unusable fragments out of the memory allocations.

46 8.4 Paging. Paging deals with two problems: 1. It is a way to allocate memory in non-contiguous blocks, which addresses the problem of external fragmentation. 2. It also deals with fragmentation in the swap space in secondary storage, where re-organization time would be so slow that compaction is not practical.

47 Paging is based on the idea that the O/S can maintain data structures that match given blocks in physical memory with given ranges of virtual addresses. Physical memory is conceptually broken into fixed size frames. Logical memory is broken into pages of the same size. In essence, the O/S maintains a lookup table telling which logical page matches with which physical frame.

48 In contiguous memory allocation there was a limit register and a relocation register. In paging there are special registers for placing the logical address and forming the physical address. In paging, fixed page sizes mean that the limits are always the same, but there is a table containing the relocation values telling which frame each page address is relocated to.

49 Every (logical) address generated by the CPU takes this form: page part (p) | offset part (d). More specifically, let an address consist of m bits. Then a logical address can be pictured as shown on the next overhead.

50 [Diagram: an m-bit logical address, with the high (m − n) bits holding the page number p and the low n bits holding the offset d]

51 The addresses are binary numbers. As a result, the components of the address fit neatly together. The (m − n) digits for p can be treated separately as a page number in the range from 0 to 2^(m−n) − 1. The n digits for d can be treated separately as an offset in the range from 0 to 2^n − 1. The m digits altogether give a single address in the range from 0 to 2^m − 1. In short, the address space of 2^m bytes consists of 2^(m−n) pages, and the size of a page is 2^n bytes.

52 Paging is based on maintaining a page table. For some value p, the corresponding f value is looked up in the page table at offset p in the table. The offset, d, is unchanged. The physical address is formed by appending the binary value for d to the binary value for f. The result is f | d. The forming of a physical address from a logical address, p | d, using a page table, is illustrated in the following diagram.
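
In code, splitting p from d and forming f | d is just shifting and masking. This is a hedged sketch assuming 4 KB pages (n = 12) and a simple flat array for the page table; the names are illustrative.

    #include <stdint.h>

    #define N 12                          /* offset bits: page size 2^12 = 4 KB */
    #define OFFSET_MASK ((1u << N) - 1)

    /* Translate a logical address p | d to a physical address f | d. */
    uint32_t translate(const uint32_t *page_table, uint32_t logical) {
        uint32_t p = logical >> N;            /* page number: the high m - n bits */
        uint32_t d = logical & OFFSET_MASK;   /* offset: the low n bits, unchanged */
        uint32_t f = page_table[p];           /* look up the frame at offset p */
        return (f << N) | d;                  /* append d to f */
    }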

53 [Diagram: a logical address p | d passing through the page table to form the physical address f | d]

54 In theory you could have a global page table containing entries for all processes. In practice, each process may have its own page table which is used when that process is scheduled. The use of the page table can be illustrated with a simple example with a single process. Each page table entry is like a base and offset for a given page in the process.

55 [Diagram: an example of a single process's logical pages mapped to scattered physical frames through its page table]

56 Note again that under paging there is no external fragmentation. Every empty physical memory space is a usable frame. Internal fragmentation will average one half of a frame per process.

57 In modern systems page sizes vary in the range of around 512 bytes to 16MB. The smaller the page size, the smaller the internal fragmentation. However, if the memory space is large, there is overhead in allocating small pages and maintaining a page table with lots of entries. As hardware resources have become less costly, larger memory spaces have become available, and page sizes have grown. Page sizes of 2KB-8KB may be considered representative of an average, modern system.

58 Summary of paging ideas. 1. The logical view of the address space is separate from the physical view. This means that code is relocatable, not absolute. 2. The logical view is of contiguous memory. Paging is completely hidden by the MMU. Allocation of frames is not contiguous.

59 3. Although the discussion has been in terms of the page table, in reality there is also a global frame table. The frame table provides the system with ready look-up of which frames have been allocated, and which are free and still available for allocation. 4. There is a page table for each process. It keeps track of memory allocation from the process point of view and supports the translation from logical to physical addresses.

60 Hardware support for paging. A page table has to hold the mapping from logical pages to physical frames for a single process. Note that the page table resides in memory. The minimum hardware support for paging is a dedicated register on the chip which holds the address of the page table of the currently running process. With this minimal support, for each logical memory address generated by a program, two accesses to actual memory would be necessary. The first access would be to the page table, the second to the physical address located there.

61 In order to be viable, paging needs additional hardware support. There are two basic choices. 1. Have a complete set of dedicated registers for the page table. This is fast, but the hardware cost (monetary and real estate on the chip) becomes impractical if the memory space is large.

62 2. The chip will contain hardware elements known as translation look-aside buffers (TLB's). This is the current state of the art, and it will be explained below. Translation look-aside buffers are in essence a special set of registers which support look-up. In other words, they are table-like. They are designed to contain keys, p, page identifiers, and values, f, the matching frame identifiers.

63 TLB's have an additional, special characteristic. They are not independent buffers; they come as a collection. The "look-aside" part of the name is meant to suggest that when a search value is "dropped" onto the TLB, for all practical purposes, all of the buffers are searched for that value simultaneously. If the search value is present, the matching value is found within a fixed number of clock cycles. In other words, look-up in a TLB does not involve linear search or any other software search algorithm. There is no order of complexity to searching depending on the number of entries in the collection of TLB's. Response time is fixed and small.

64 TLB's are like a highly specialized cache. The set of TLB's wouldn't be big enough to store a whole page table. When a process starts accessing pages, this requires reading the page table and finding the frame. Once a page has been read the first time, it's entered into the TLB. Subsequent reads to that page will not require reading from the page table in memory.

65 Just like with caching, some process memory accesses will be a TLB "hit" and some will be a TLB "miss". A hit is very economical. A miss requires reading the page table again and replacing (the LRU) entry in the TLB with the most recent page accessed. Memory management with TLB's is shown in the following diagrams.

66 [Diagram: memory management with TLB's, 1 of 2]

67 [Diagram: memory management with TLB's, 2 of 2]

68 Note the following things about the diagram. The page table is complete, so a search of the page table simply means jumping to offset p in the table. The TLB is a subset, so it has to have both key, p, and look-up, f, values in it. The diagram shows addressing, but it doesn't attempt to show, through arrows or other notation, the replacement of TLB entries on a miss.

69 Paging costs can be summarized in this way. On a hit: TLB access + memory access. On a miss: TLB access + memory access to page table + memory access to desired page. The book states that typical TLB's are in the range from 16 to 512 entries. With this number of entries, a hit ratio of 80%-98% can be achieved.

70 Given a hit ratio and some sample values for the time needed for TLB and memory access, weighted averages for the cost of paging can be calculated. For example, let the time needed for a TLB search be 20 ns, and let the time needed for a main memory access be 100 ns.

71 Cost of TLB hit: 20 + 100 = 120 ns. Cost of TLB miss: 20 + 100 + 100 = 220 ns. Let the hit ratio be 80%. Then the overall, weighted cost of paging is: 0.8(120) + 0.2(220) = 140 ns.
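
The weighted-average calculation generalizes easily. The following sketch, with illustrative names and assuming the book's simple cost model, computes the effective access time for any hit ratio and any number of page table levels; the two-level case comes up again later in the chapter.

    #include <stdio.h>

    /* Effective access time: a hit costs one TLB search plus one memory
       access; a miss costs the TLB search, one memory access per page
       table level, and then the access to the desired address. */
    double effective_access_time(double hit_ratio, double tlb_ns,
                                 double mem_ns, int table_levels) {
        double hit  = tlb_ns + mem_ns;
        double miss = tlb_ns + table_levels * mem_ns + mem_ns;
        return hit_ratio * hit + (1.0 - hit_ratio) * miss;
    }

    int main(void) {
        /* The values above: 20 ns TLB, 100 ns memory, 80% hits, 1 level. */
        printf("%.0f ns\n", effective_access_time(0.80, 20.0, 100.0, 1));  /* 140 */
        return 0;
    }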

72 In other words, if you could always access memory directly, it would take 100 ns. With paging, it takes on average 140 ns. Paging imposes a 40% overhead on memory access. On the other hand, without TLB's, every memory access would cost 100 ns + 100 ns = 200 ns, which would mean a 100% overhead on memory access.

73 Why would you live with a 40% overhead cost on memory accesses? Remember the reasons for introducing the idea of paging. It allows for non-contiguous memory allocation. This solves the problem of external fragmentation in memory. As long as the page size strikes a balance between large and small, internal fragmentation is not great. There is also a potential benefit in reducing fragmentation in swap space, but supporting non-contiguous memory allocation is the main event.

74 The previous discussion has referred to a page table as belonging to one process. This would mean there would be many page tables. When a new process was scheduled, the TLB would be flushed so that pages belonging to the new process could be loaded.

75 The alternative is to have a single, unified page table. This means that each page table entry, in addition to a value for f, would have to identify which process it belonged to. The identifier is known as an ASID, an address space id.

76 Such a table would work like this. When a process generated a page id, the TLB would be searched for that page. If found, it would further be checked to see if the page belonged to the process. If so, everything is good. If not, this is simply a page miss. Replacement would occur using the usual algorithm for replacement on a miss. With a page table like this, there is no need for flushing when a new process is scheduled. In effect, the TLB is flushed entry by entry as misses occur. A sketch of the look-up follows.
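
This is a hedged sketch of a TLB whose entries carry an ASID. The structure and names are illustrative assumptions; real hardware compares all entries in parallel, which the loop only simulates.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_SIZE 64   /* within the book's typical range of 16 to 512 */

    typedef struct {
        bool     valid;
        uint16_t asid;    /* which address space the entry belongs to */
        uint32_t p;       /* key: logical page id */
        uint32_t f;       /* value: matching frame id */
    } tlb_entry_t;

    /* Hit only if the page matches AND it belongs to the right process.
       On a miss the caller consults the page table and replaces an entry,
       so the TLB is "flushed" entry by entry instead of all at once. */
    bool tlb_lookup(const tlb_entry_t tlb[TLB_SIZE],
                    uint16_t asid, uint32_t p, uint32_t *f_out) {
        for (int i = 0; i < TLB_SIZE; i++) {
            if (tlb[i].valid && tlb[i].asid == asid && tlb[i].p == p) {
                *f_out = tlb[i].f;
                return true;
            }
        }
        return false;
    }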

77 Implementing protection in the page table. Recall that a page table functions like a set of base and limit registers. Each page address is a base, and the fixed page size functions as a limit. If a system maintains page tables of length n, then the maximum amount of memory that could theoretically be allocated to a process is n pages, or n * (page length) bytes.

78 In practice, processes do not always need the maximum amount of memory and will not be allocated that much. This information can be maintained in the page table by the inclusion of a valid/invalid bit. If a page table entry is marked "i", this means that if a process generates that logical page, it is trying to access an address outside of the memory space that was allocated to it. A diagram of the page table follows.

79 [Diagram: a page table with a valid/invalid bit on each entry]

80 An alternative to valid/invalid bits is a page table length register (PTLR). The idea is simple: this register is like a limit register for the page table. The range of logical addresses for a given process begins at page 0 and goes to some maximum which is less than the absolute maximum size allowed for a page table. When a process generates a page, it is checked against the PTLR to see if it's valid. Both checks are sketched below.
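
The following sketch shows the two protection checks side by side, with illustrative names. Returning false stands in for the trap a real MMU would raise.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t frame;
        bool     valid;    /* the valid/invalid bit */
    } pte_t;

    /* Reject the access if the page is beyond the PTLR or marked invalid. */
    bool check_and_translate(const pte_t *page_table, uint32_t ptlr,
                             uint32_t p, uint32_t *frame_out) {
        if (p >= ptlr)               /* PTLR: a limit register for the table */
            return false;
        if (!page_table[p].valid)    /* page not allocated to this process */
            return false;
        *frame_out = page_table[p].frame;
        return true;
    }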

81 The valid/invalid bit scheme can be extended to support finer protections. For example, read/write/execute protections can be represented by three bits. You typically think of these protections as being related to a file system. In theory, different pages of a process could have different attributes. This may be especially important (and likely considerably more complicated in practice) if you are dealing with shared memory accessible to >1 process.

82 8.5 Structure of the Page Table. Modern systems may support address spaces in the range of 2^32 to 2^64 bytes. 2^32 is 4 gigabytes; 2^64 ≈ 18.4 × 10^18. In any case, the higher value is what you get if you allow all 64 bits of a 64 bit architecture to be used as an address. Note that this is 16 × 2^60, but by this stage the powers of 2 and the powers of 10 do not match up the way they do where we casually equate 2^10 to 10^3.

83 According to Wikipedia, the standard prefixes for the SI units of measure are:
Multiples: deca- (da, 10^1), hecto- (h, 10^2), kilo- (k, 10^3), mega- (M, 10^6), giga- (G, 10^9), tera- (T, 10^12), peta- (P, 10^15), exa- (E, 10^18), zetta- (Z, 10^21), yotta- (Y, 10^24)
Subdivisions: deci- (d, 10^−1), centi- (c, 10^−2), milli- (m, 10^−3), micro- (µ, 10^−6), nano- (n, 10^−9), pico- (p, 10^−12), femto- (f, 10^−15), atto- (a, 10^−18), zepto- (z, 10^−21), yocto- (y, 10^−24)

84 The reality is that modern systems support logical address spaces too large for simple page tables. In order to support these address spaces, hierarchical or multi-level paging is used. Take the lower of the address spaces given above, 2^32. Let the page size be 2^12, or 4 KB.

85 2^32 bytes of memory divided into pages of size 2^12 bytes means a total of 2^20 pages. The corresponding physical address space would consist of 2^20 frames. That means that each page table entry would have to be at least 20 bits long, in order to hold the frame id. Suppose each page table entry is 4 bytes, or 32 bits, long. This would allow for validity and protection bits in addition to the frame id. It's also simpler to argue using powers of 2 rather than speaking in terms of a table entry of length 3 bytes.

86 A page table with 2^20 entries, each of size 2^2 bytes, means the page table is of length 2^22 bytes, or 4 MB. But a page itself under this scenario was only 2^12 bytes, or 4 KB. In other words, it would take 1K of pages to hold the complete page table for a process that had been allocated the theoretical maximum amount of memory possible.

87 To restate the result in another way, the page table won't fit into a single page. In theory, it might be possible to devise a hybrid system where the memory for page tables was allocated and addressed by the O/S as a monolithic block, while this was used to support paging of user memory. This would be a mess, and it leads to questions like: could there be fragmentation in the monolithic page table block?

88 The practical solution to the problem is hierarchical or multi-level paging. In one of its forms, it's similar to indexing. The book refers to this as a forward-mapped page table. Given a logical page value, you don't look up the frame id directly. You look up another page that contains a page id for the page containing the desired frame id. The book mentions that this kind of scheme was used by the Pentium II.

89 The scheme is illustrated in the following diagrams. A logical address of 32 bits can be divided into blocks of 10, 10, and 12 bits. 10 + 10 = 20 bits correspond to the page identifier. The remaining 12 bits correspond to d, the offset into a page of size 2^12 bytes.
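
A sketch of the two-level walk under the 10 | 10 | 12 split follows. The table representation (arrays of pointers) is a simplifying assumption for illustration; in reality each level holds frame numbers and the walk goes through physical memory.

    #include <stdint.h>

    /* Two-level (forward-mapped) translation for a 32-bit address
       split 10 | 10 | 12. Each table look-up is one memory access. */
    uint32_t translate_two_level(uint32_t *const *outer_table,
                                 uint32_t logical) {
        uint32_t p1 = (logical >> 22) & 0x3FFu;  /* top 10 bits: outer index */
        uint32_t p2 = (logical >> 12) & 0x3FFu;  /* next 10 bits: inner index */
        uint32_t d  = logical & 0xFFFu;          /* low 12 bits: page offset */

        uint32_t *inner = outer_table[p1];       /* first memory access */
        uint32_t  f     = inner[p2];             /* second memory access */
        return (f << 12) | d;                    /* third access fetches data */
    }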

90 This is the form of a page address: [Diagram: p1 (10 bits) | p2 (10 bits) | d (12 bits)]

91 This is how a logical address maps to a physical address through multiple levels

92 This shows the multiple layers of the page table

93 Calculating the cost of paging using a multi-level page table. In preview, the page table portion of the cost of a miss will be twice as high, because there are two accesses to the page table, one per level. As before, let the time needed for a TLB search be 20 ns, and let the time needed for a main memory access be 100 ns.

94 Cost of TLB hit: 20 + 100 = 120 ns. Cost of TLB miss: 20 + 100 + 100 + 100 = 320 ns. The first 100 is the outer page table, the second 100 is the inner page table, the third 100 is the access to the desired address. Let the hit ratio be 98%. Then the overall, weighted cost of paging is: 0.98(120) + 0.02(320) = 124 ns. The overhead cost of paging under this scheme is 24%.

95 Observe what happens if you go to a 64 bit address space and a page size of 4KB. Sample address breakdowns are shown on the next overhead for two and three level paging. The thing to notice is that the number of bits is so high that you again have the problem that a level of the page table won't fit into a single page.

96 [Diagram: sample 64-bit address breakdowns for two and three level paging]

97 With an address space of this size, six levels would be needed. Depending on page size, some 32 bit systems go to 3 or 4 levels. For 64 bit address spaces, multi-level paging is too deep. Think of the cost of a miss in the weighted average for addressing.

98 Hashed page tables: Hashing. Hashed page tables provide an alternative approach to multi-level paging in a large address space. The first thing you need to keep in mind is what hashing is, how it works, and what it accomplishes. Let y = f(x) be a hashing function.

99 You may have a widely dispersed set of n different x values in the domain. You have a specific, compact set of y values that you want to map to in the range. In the ideal case, there would be a set of exactly n different, contiguous y values. f() is devised so that the likelihood that any two x values will give the same y value is small. In the ideal case, no two x values would ever collide.

100 f() also has to be quick and easy to compute. In practice the range will be somewhat larger than n, and collisions may occur. The most common kind of hashing function is based on division and remainders. Choose z to be the smallest prime number larger than n. Then let f(x) = x % z. f(x) will fall into the range [0, z − 1].

101 Hashing makes it possible to create a look-up table that doesn't require an index or any sorting or searching. Let there be z entries in the table, at offsets 0 through z − 1. Store the entry for x at the offset f(x) in the table. When x occurs again and you want to look up the corresponding value in the table, compute f(x) and read the entry at that offset. Note that the value, x, is repeated in the table entry. This is necessary in order to resolve collisions. This is illustrated in the following diagram.
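
A minimal sketch of this kind of division/remainder table follows, with an illustrative prime and linear probing as a simple stand-in for whatever overflow scheme is used. The key x is stored in each slot exactly so that collisions can be detected.

    #include <stdint.h>

    #define Z 1021    /* illustrative prime, chosen larger than the key count n */

    typedef struct {
        int      in_use;
        uint32_t x;       /* the key itself, kept to resolve collisions */
        uint32_t value;
    } slot_t;

    static uint32_t f(uint32_t x) { return x % Z; }   /* f(x) in [0, Z - 1] */

    /* Read the entry at offset f(x); probe forward if a collision pushed
       the entry elsewhere. Returns 1 and sets *value_out on success. */
    int lookup(const slot_t table[Z], uint32_t x, uint32_t *value_out) {
        for (uint32_t i = f(x), n = 0; n < Z; i = (i + 1) % Z, n++) {
            if (!table[i].in_use)
                return 0;                  /* x was never stored */
            if (table[i].x == x) {
                *value_out = table[i].value;
                return 1;
            }
        }
        return 0;
    }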

102 [Diagram: a hash table where the entry for x is stored at offset f(x), with x repeated in the entry]

103 Hashed page tables: Why? Consider again the background of multi-level paging and its disadvantages. Conceivably you could be maintaining a global page table or a page table for each process. Since memory is being accessed page by page, it's desirable for a large page table itself to be accessible by page. As the address space grows large, it becomes impossible to store a complete page table in one page.

104 A multi-level page table provides a tree-like way of using pages to access memory addresses. The important thing to note is that each level in the tree corresponds to a block of bits in an address. The larger the address space, the more levels in the tree, and the more memory accesses to arrive at the desired address.

105 The important thing to note is this: this structure provides a way of accessing the whole address space. Now consider this: it is possible to have a 64 bit architecture machine, for example, without having 2^64 bytes of installed memory. Even if you had maximum memory installed, it would not be in order to accommodate a single process that required that much memory. The purpose would be to support multi-tasking, with each process getting a portion of the memory.

106 Now note this: even if a process got only a part of memory, the frames allocated to it could be dispersed across the whole address space. In other words, a single process might use the address space very sparsely, and there is no way to confine it to a fixed subset of frames.

107 Now, for the sake of argument, assume that the page size of a system is large enough that a page table that can be contained in one page would be the maximum amount of memory that could be allocated to one process. The system would still have to maintain a global record of all process/page/frame assignments. However, hashing makes it possible to store the mapping for a single process in one page.

108 In summary, making a hashed page table involves the following: When a virtual page is allocated a frame, the virtual page id, p, is hashed to a location in the hash table. The hash table entry contains p, to account for collisions, and the id of the allocated frame. See the following diagram.

109 [Diagram: a hashed page table with a collision resolved by linking]

110 In this illustration, a collision is shown. Collisions are handled with links rather than overflow. The two logical pages, q and p, hash to the same location. Their corresponding frames are s and r, respectively. The book doesn't give any details on the organization of a hash table on a page. In general, if you're doing division/remainder hashing, you might expect that the divisor is chosen so that the size of a hash table node times the number of possible hash values is less than the size of a whole page.
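
A sketch of a hashed page table look-up with linked collision chains, matching the diagram's scheme, might look like this. The node layout and table size are assumptions for illustration.

    #include <stddef.h>
    #include <stdint.h>

    #define HPT_SLOTS 509   /* illustrative prime number of slots */

    /* One linked node per (page, frame) pair; colliding pages share a slot. */
    typedef struct hpt_node {
        uint32_t p;              /* logical page id, the key */
        uint32_t f;              /* allocated frame id, the value */
        struct hpt_node *next;   /* link to the next colliding entry */
    } hpt_node_t;

    /* Hash p to a slot, then walk the chain comparing keys. */
    int hpt_lookup(hpt_node_t *const table[HPT_SLOTS], uint32_t p,
                   uint32_t *f_out) {
        for (hpt_node_t *node = table[p % HPT_SLOTS]; node; node = node->next) {
            if (node->p == p) {
                *f_out = node->f;
                return 1;
            }
        }
        return 0;   /* no frame: the access is out of range */
    }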

111 Clustered page tables. The book doesn't give a very detailed explanation of this. The general idea appears to be that memory can be allocated so that these properties hold: several different (say 16) page id's, p, will hash to the same entry in the page table, and this entry will then have no fewer than 16 linked nodes, one for each page (and possibly more, due to collisions). Honestly, it's not clear to me what advantage this gives. The length of the page table would be reduced by a factor of 16, but it seems that its width would be increased by a factor of 16. I have no more to say about this, and there will be no test questions on it.

112 Inverted page tables. Inverted page tables are an important alternative to multi-level page tables and hashed page tables. Recall that with (non-inverted) page tables: 1. The system has to maintain a global frame table that tells which frames are allocated to which processes.

113 2. The system has to maintain a page table for each process, which makes it possible to look up the physical frame that is allocated to a given logical address. Simple illustrations of both of these things are given on the next overhead.

114 [Diagram: a global frame table alongside per-process page tables]

115 An inverted page table is an extension of the frame table. Instead of many page tables, one for each process, there is one master table. The offsets into the table represent the frame id's for the whole physical memory space. The table has two columns, one for pid, and one for a logical page id, p, belonging to the process.

116 [Diagram: an inverted page table, indexed by frame id, with pid and page id columns]

117 The use of an inverted page table to resolve a logical address is shown in the diagram on the next overhead. The key thing to notice about the process is that it is necessary to do linear search through the inverted page table, looking for a match on the pid that generated the address and the logical address that was generated. The offset into the table identifies the frame that was allocated to it.

118 [Diagram: resolving a logical address by searching the inverted page table]

119 Searching the inverted page table is the cost of this approach. There is no choice except for simple, linear search, because the random allocation of frames means that the table entries are not in any order. It is not possible to do binary search or anything else.
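
The search is straightforward to sketch; note that the matching offset is itself the frame id. Names are illustrative.

    #include <stdint.h>

    /* One inverted page table entry per physical frame. */
    typedef struct {
        uint32_t pid;   /* owning process */
        uint32_t p;     /* logical page id within that process */
    } ipt_entry_t;

    /* Linear search: the entries are in no exploitable order, so every
       frame may have to be examined. The offset found is the frame id. */
    long ipt_find_frame(const ipt_entry_t *ipt, long n_frames,
                        uint32_t pid, uint32_t p) {
        for (long frame = 0; frame < n_frames; frame++)
            if (ipt[frame].pid == pid && ipt[frame].p == p)
                return frame;
        return -1;   /* (pid, p) has no frame: out-of-range access */
    }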

120 This is where hashing and inverted page tables come together. The way to get direct access to a set of values in random order is to hash. Let n be the total number of pages/frames and devise a hashing function that will provide this mapping: f(pid, p) → [0, n − 1]. Use this function to allocate frames to processes.

121 Then when the logical address (pid, p) is generated, hash it. In theory, the hash function value itself could be the frame id, f, but you still have to do table look-up because of the possibility of collisions. You can go directly to offset f in the table and check there for the key values (pid, p); you don't have to do linear search. If not found, check for overflow or linking until you find the desired values. (Note that if you don't find the desired values, the process has tried to access an address that is out of range.)

122 The most recent discussions have left TLB's behind, but they are still relevant as hardware support for addressing. A diagram of the use of a hashed inverted page table with TLB's is shown on the next overhead. In looking at the picture, remember that since the table is stored in memory, that adds an extra memory access to the overall cost of addressing. Also note that in reality the table would probably be bigger than a page. The table would be stored in system space and might be addressed using a special scheme.

123 [Diagram: a hashed inverted page table used with TLB's]

124 The previous discussion included the assumption that you could allocate frames based on hashing. This simplified things and made the diagram easier to draw. In reality, you would have a frame table that recorded which frame was allocated to which process and page. You would then have a separate hash table that supported look-up into the frame table.

125 [Diagram: a frame table with a separate supporting hash table, 1 of 2]

126 [Diagram: a frame table with a separate supporting hash table, 2 of 2]

127 Shared pages. The basic idea is this: shared memory between processes can be implemented by mapping their logical addresses to the same physical pages (frames). An operating system may support IPC this way. It is also a convenient way to share (read only) data. It's also possible to share code, such as libraries which >1 process need to run.

128 In order for code to be shareable, it has to be reentrant. Reentrant means that there is nothing in the code which causes it to modify itself. Consider the MISC sumtenV1.txt example. It is divided into a data segment and a code segment. Two processes could share the code as long as the accesses to memory variables were mapped to separate copies of the variables.

129 Every memory access that a program makes has to pass through the O/S. This means that the O/S is responsible for catching incorrect memory accesses and for detecting when shared code may be being misused. Threads are a good, concrete example of shared code. We have considered some of the problems that can occur when threads share references to common objects. If they share no references, then they are completely trouble free.

130 Keep in mind that an inverted page table is a global structure that effectively maps one logical page to one physical frame. This kind of arrangement makes it difficult to support memory pages (frames) shared between different processes. To support shared memory, it would be necessary to add linking to the table or add other data structures to the system.

131 8.6 Segmentation. The idea behind segmentation is that the user view of memory is not simply a linear array of bytes. Users tend to think of their applications in terms of program units. The relative locations of different modules or classes are not important. Each separate unit can be identified by its offset from some base and its length, where the length of each is variable.

132 Segmentation supports the user view of memory. An address is conceptually of the form <segment number, offset>. An address isn't simply a pure logical address or a page plus offset.

133 Implementation of segmentation. The system would have to support segmented addresses in software. It would then be necessary to map from segmented addresses to physical addresses.

134 Segments may be reminiscent of simple contiguous memory allocation. They may also be thought of, very roughly, as (comparatively large) pages of varying size. Just like with paging, hardware support in the MMU makes the translation possible. The diagram on the next overhead shows how segmented addresses are resolved.

135 [Diagram: resolving a segmented address through the segment table]

136 This is similar to one of the earliest diagrams showing in general how page addresses were resolved. The segment table is like a set of base-limit pairs, one for each segment. Just like with pages, in the long run you would probably want some sort of TLB support. For the time being, segments and pages are treated separately; in real, modern systems with segmentation, the segments are subdivided into pages which are accessed through a paging mechanism. A sketch of the resolution follows.
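
Here is a hedged sketch of segment-table resolution, with illustrative names; the per-segment limit check is what distinguishes it from fixed-size paging.

    #include <stdint.h>

    /* The segment table: a set of base-limit pairs, one per segment. */
    typedef struct {
        uint32_t base;    /* where the segment begins in physical memory */
        uint32_t limit;   /* the segment's length, which varies per segment */
    } segment_t;

    /* Resolve <segment number s, offset> to a physical address.
       Returns -1 where real hardware would trap on a bad offset. */
    int64_t resolve(const segment_t *seg_table, uint32_t s, uint32_t offset) {
        if (offset >= seg_table[s].limit)
            return -1;
        return (int64_t)seg_table[s].base + (int64_t)offset;
    }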

137 Protection and sharing with segmentation. The theory is that protection and sharing make more logical sense under a segmented scheme. Instead of worrying about protection and sharing at a page level, the assumption is that the same protection and sharing decisions would logically apply to a complete segment.

138 In other words, protection is applied to semantic constructs like "data block" or "program block". Under a segmented scheme, semantically different blocks would be stored in different segments. Similarly with sharing: if two processes need to share the same block, store the block in a given segment and give both processes access to the segment.

139 Although perhaps clearer than paged sharing, segmented sharing doesn't solve all of the problems of sharing. If code is shared and two processes access it, the system still has to resolve addresses when processes cross the boundary from unshared to shared code. In other words, two processes may know the same code by different symbolic names; potentially, ifs or jumps across boundaries have to be supported (from one address space to another), and the return from shared code has to go to the address space of whichever process called it.

140 Segmentation, in the sense that it's like contiguous memory allocation, suffers from the problem of external fragmentation. The difference is that a single process consists of multiple segments, and each segment is loaded into contiguous memory. The ultimate solution to this problem is to break the segments into pages.

141 8.7 Example: The Intel Pentium. The reality is that the Intel 8086 architecture has had segmented addressing from the beginning. (The Motorola 68000 didn't.) The following details are given in the same spirit that the information about scheduling and priorities was given in the chapter on scheduling. Namely, to show that real systems tend to have many disparate features, and overall they can be somewhat complex.

142 Some information about Intel addressing: The maximum number of segments per process is 16K (2^14). Each segment can be as large as 4GB (2^32). A page is 4KB (2^12), so a segment may consist of up to 2^20, or 1M, of pages.

143 The logical address space of a process is divided into two partitions, each of up to 8K segments. Partition 1 is private to the process. Information about its segments is stored in the local descriptor table. Partition 2 contains segments shared among processes. Information about these segments is stored in the global descriptor table.

144 The first part of a logical address is known as a selector. It consists of these parts: 13 bits for segment id, s; 1 bit for global vs. local, g; 2 bits for protections. (That is 14 bits total for identifying the segment.)

145 Within each segment, an address is paged. It takes two levels to hold the page table. The page address takes the form described earlier: 10 bits for the outer page of the page table, 10 bits for the inner page of the page table, 12 bits for the offset. (At 4 bytes per page table entry, you can fit 2^10 entries into a 4KB page.)

146 Notice that you've got both 14 bits identifying a segment and 32 bits of address within a segment. This means that in a 32 bit architecture you can't "use" all of the bits. There is a limit on how many segments total you can have, but there is flexibility in where they're located in memory. Take a look at the following diagram and weep.

147 [Diagram: Intel Pentium address translation through segmentation and two-level paging]

148 The End

