
1 OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition
OPERATING SYSTEMS DESIGN AND IMPLEMENTATION, Third Edition. ANDREW S. TANENBAUM and ALBERT S. WOODHULL. Chapter 4: Memory Management. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

2 Monoprogramming without Swapping or Paging
Just run one program at a time, overwriting memory each time the running program changes. The OS receives a command, loads the corresponding program into memory, and runs it. (a) old-style mainframes; (b) common in PDAs and embedded systems; (c) used by early PCs, e.g. BIOS device drivers in ROM. Figure 4-1. Three simple ways of organizing memory with an operating system and one user process. Other possibilities also exist. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

3 Multiprogramming with Fixed Partitions (1)
Figure 4-2. (a) Fixed memory partitions with separate input queues for each partition. Different processes have different requirements, so allocate partitions of different sizes. Queue each process for the smallest partition it fits in. Disadvantage: partitions sit unused when they could satisfy a smaller process. Look at the queue for Partition 1: all the small jobs queue up there even though they could use Partition 3. Note that addresses within a partition are relative to the partition's start, which raises two problems: relocation (a process can be loaded into any slot) and protection (processes must be protected from one another). Base and limit registers help solve both, at the cost of extra work on every memory reference, as the sketch below illustrates. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
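To make that per-reference cost concrete, here is a minimal sketch (not from the book) of the check an MMU performs on every reference when base and limit registers are used; the names are invented for illustration.

```c
#include <stdint.h>

typedef struct {
    uint32_t base;   /* physical address where the partition starts */
    uint32_t limit;  /* size of the partition in bytes */
} mmu_regs;

/* Returns the physical address, or -1 to signal a protection trap.
 * Note the comparison and addition happen on EVERY memory reference. */
int64_t translate(const mmu_regs *r, uint32_t virt)
{
    if (virt >= r->limit)
        return -1;                      /* outside the partition: trap */
    return (int64_t)r->base + virt;     /* relocate by adding the base */
}
```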

4 Multiprogramming with Fixed Partitions (2)
Figure 4-2. (b) Fixed memory partitions with a single input queue. A single queue removes the main disadvantage of multiple queues: no partition sits idle while jobs wait elsewhere. The scheduler can search the queue for the largest job that fits, but that discriminates against small jobs. Solutions? Reserve a small partition for small jobs, or guarantee that a job can be skipped over at most k times, after which it must be run. Moral: more decisions mean more processing, which means less efficiency. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

5 Swapping (1) Swapping means there is not enough room for all processes, so some must stay on disk and get swapped in when it's their turn. Swapping brings in the entire process (unlike paging and virtual memory, where only parts of processes are brought in). It happens dynamically (more management overhead than fixed partitions), and addresses must be relocated because a process can be swapped into any free slot. Protection must also be provided (base & limit!). Swapping creates holes (memory fragmentation). Figure 4-3. Memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

6 Swapping (2)
Figure 4-4. (a) Allocating space for a growing data segment. Processes tend to grow, especially when their programmers don't manage memory very well! So allocate a bit extra, leaving room to grow. If a process fills its partition, it must be moved to a bigger hole, swapped out and back in to a larger partition, or even killed! Swap out only the memory that is actually used, not the whole partition. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

7 Swapping (3)
Figure 4-4. (b) Allocating space for a growing stack and a growing data segment. Processes can grow in two ways: heap (data) and stack. Let the stack (local variables and return addresses for procedure calls) grow downward and the data segment (heap) grow upward. When they run into each other, the process must be moved to a larger hole. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

8 Memory Management with Bitmaps
Divide memory up into allocation units - a byte, a word, a few words, or kilobytes. Each bit in the map corresponds to one allocation unit: is it free or used? The smaller the allocation unit, the larger the map. Example: a 4-byte allocation unit means 32 bits of memory need only 1 bit in the memory map. Pretty simple, but the main drawback is that you have to find a run of contiguous free bits in the map - problematic when the run straddles word boundaries (see the sketch below). Linked lists are easier to work with, because adjacent nodes can be modified when holes are freed. The list holds the same information as the bitmap, sorted by address, and is much easier to search (it just requires more memory). Figure 4-5. (a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions (0 in the bitmap) are free. (b) The corresponding bitmap. (c) The same information as a list. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
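A minimal sketch of the bitmap search described above, assuming 1 bit per allocation unit with 0 = free (matching the figure); the function is illustrative, not from MINIX.

```c
#include <stddef.h>

/* Scan the bitmap for `count` contiguous free units. Returns the first
 * unit number of the run, or -1 if no run is long enough. The bit
 * fiddling across byte (and word) boundaries is exactly the cost the
 * slide mentions. */
long find_free_run(const unsigned char *map, size_t units, size_t count)
{
    size_t run = 0;
    for (size_t i = 0; i < units; i++) {
        int used = (map[i / 8] >> (i % 8)) & 1;
        run = used ? 0 : run + 1;
        if (run == count)
            return (long)(i - count + 1);
    }
    return -1;
}
```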

9 Memory Management with Linked Lists
Use doubly linked lists (why? - see the sketch below) and keep the list sorted by address. Walk through this example with a linked list. Figure 4-6. Four neighbor combinations for the terminating process, X. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
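Why doubly linked? When process X terminates, its entry may have to merge with a free neighbor on either side - the four combinations of Figure 4-6 - and that requires O(1) access in both directions. A hedged sketch, with an invented node layout:

```c
#include <stdlib.h>

struct node {
    struct node *prev, *next;  /* doubly linked, sorted by address */
    unsigned base, len;
    int is_hole;               /* 1 = hole, 0 = process */
};

/* Process X terminates: turn its node into a hole, then coalesce with
 * any free neighbors. Handles all four combinations from Figure 4-6. */
void terminate(struct node *x)
{
    x->is_hole = 1;
    if (x->next && x->next->is_hole) {     /* hole after X: absorb it */
        struct node *n = x->next;
        x->len += n->len;
        x->next = n->next;
        if (n->next) n->next->prev = x;
        free(n);
    }
    if (x->prev && x->prev->is_hole) {     /* hole before X: fold X in */
        struct node *p = x->prev;
        p->len += x->len;
        p->next = x->next;
        if (x->next) x->next->prev = p;
        free(x);
    }
}
```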

10 Memory Allocation Algorithms
First fit: use the first hole big enough.
Next fit: use the next hole big enough, starting from where the last search left off.
Best fit: search the list for the smallest hole big enough.
Worst fit: search the list for the largest hole available.
Quick fit: keep separate lists of commonly requested sizes.
With a linked list, a variety of algorithms can be applied to allocate space to processes. Next fit keeps track of where it left off (it turns out to be slightly less efficient - why?). Look at Figure 4-5 to see how each algorithm behaves with a new job that needs 1 unit, 2 units, etc. First fit generates larger leftover holes, making it easier to fill them with processes. All of these can be sped up with two separate lists, searching only the holes. The hole list can then be sorted by size, and the holes themselves can store the list nodes (instead of auxiliary structures). A sketch of first fit and best fit follows. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
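A sketch of first fit and best fit over an address-sorted hole list; the structure is illustrative. Note that best fit must always scan the whole list, while first fit can stop early - one reason "smarter" is not automatically faster.

```c
#include <stddef.h>

struct hole { struct hole *next; unsigned base, len; };

/* First fit: take the first hole that is big enough. */
struct hole *first_fit(struct hole *h, unsigned size)
{
    for (; h != NULL; h = h->next)
        if (h->len >= size)
            return h;
    return NULL;
}

/* Best fit: scan everything for the smallest adequate hole. */
struct hole *best_fit(struct hole *h, unsigned size)
{
    struct hole *best = NULL;
    for (; h != NULL; h = h->next)
        if (h->len >= size && (best == NULL || h->len < best->len))
            best = h;
    return best;
}
```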

11 Paging (1) Virtual memory - keeping parts of a process on disk and paging them in and out as needed. Virtual addresses go to an MMU, which maps each virtual address to a physical address. On a machine without virtual memory, the two addresses are the same. Figure 4-7. The position and function of the MMU. Here the MMU is shown as being part of the CPU chip because it commonly is nowadays. However, logically it could be a separate chip, and was in years gone by. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

12 Paging (2)
Figure 4-8. The relation between virtual addresses and physical memory addresses is given by the page table. The computer supports 16-bit addresses, or 64K (2^6 * 2^10), but has only 32KB of physical memory. Virtual space is split into pages, which correspond to page frames in physical memory; pages and page frames must be the same size, e.g. 4KB. Example: the program references location 0. This is a virtual address on page 0, which actually resides in page frame 2, so the MMU adds 8K to the virtual address. The pages marked with an X are not in physical memory; the MMU tracks this with a present/absent bit. When an address on such a page is accessed, a page fault occurs: a little-used page frame is written back to disk, the needed virtual page is copied in, and the MMU updates its tables. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

13 Paging (3)
Figure 4-9. The internal operation of the MMU with 16 4-KB pages. There are fewer page frames than virtual pages, so a reference to an absent virtual page causes a shuffle; the present/absent bit keeps track of which pages are in memory. If the MMU receives a request for an absent page (a page fault), it moves that page into a page frame (possibly dislodging one already there) and updates the table. Example: take a virtual address and convert it to binary. The 16 bits split into a virtual page number (4 bits) and an offset (12 bits, addressing within a 4K page). Each page table entry holds 3 bits identifying the page frame and 1 present/absent bit. Why only 3 bits? There are only 8 page frames, so the 3-bit frame number plus the 12-bit offset gives a 15-bit physical address - enough for 32K. A sketch follows. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
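A sketch of the translation this MMU performs, under the stated parameters (16-bit virtual address, 4-bit page number, 12-bit offset, 3-bit frame number plus a present bit per entry); the exact entry layout is an assumption for illustration.

```c
#include <stdint.h>

#define OFFSET_BITS 12
#define PRESENT     0x08   /* illustrative position of the present bit */

/* Returns the 15-bit physical address, or -1 to signal a page fault. */
int32_t mmu_translate(const uint8_t pagetable[16], uint16_t vaddr)
{
    unsigned page   = vaddr >> OFFSET_BITS;              /* top 4 bits  */
    unsigned offset = vaddr & ((1u << OFFSET_BITS) - 1); /* low 12 bits */
    uint8_t  entry  = pagetable[page];

    if (!(entry & PRESENT))
        return -1;             /* absent: the OS must bring the page in */
    return (int32_t)(((unsigned)(entry & 0x07) << OFFSET_BITS) | offset);
}
```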

14 Page Tables
Purpose: map virtual pages onto page frames. Two major issues must be faced: the page table can be extremely large, and the mapping must be fast. The page table is a function: physical page frame = page_table(virtual page). Most modern chips use 32 bits; 64-bit and even 128-bit addresses will soon be common. With a 4K page size, the page table gets very large - and each process has its own page table! This MMU page table lookup occurs on every memory reference (profile that!). Two extreme solutions: keep the page table in registers (expensive context switch), or keep it in main memory, pointed to by a single register (quick context switch, but extra memory references). Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

15 Multilevel Page Tables
Book example: a 32-bit address = 10 bits (PT1) + 10 bits (PT2) + 12 bits (offset within a 4K page). This approach avoids keeping the entire page table in memory (remember, it can get quite large). The top-level table has 1K entries (2^10), each representing a 4M chunk of the address space (why? 2^10 pages x 4K = 4M). How many second-level tables can there be? 1024 (2^10). How big is each second-level table? Also 1024 entries (2^10). Each second-level entry has a present/absent bit and identifies a page frame; the last 12 bits are the offset into that page frame (see the sketch below). Figure 4-10. (a) A 32-bit address with two page table fields. (b) Two-level page tables. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
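A sketch of the two-level split, assuming the 10+10+12 layout above. The entry format in `walk` is an assumption (second-level entries holding 4K-aligned frame base addresses), invented for illustration.

```c
#include <stdint.h>

#define PT1(va)    (((va) >> 22) & 0x3FF)  /* top-level index, 10 bits    */
#define PT2(va)    (((va) >> 12) & 0x3FF)  /* second-level index, 10 bits */
#define OFFSET(va) ((va) & 0xFFF)          /* byte within the 4-KB page   */

/* Walk a two-level table. pt1[i] points to a second-level table whose
 * entries are assumed to hold 4K-aligned frame base addresses; a NULL
 * or absent entry at either level would be a page fault. */
uint32_t walk(uint32_t *const *pt1, uint32_t va)
{
    const uint32_t *pt2 = pt1[PT1(va)];
    return pt2[PT2(va)] | OFFSET(va);
}
```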

16 Structure of a Page Table Entry
Of course, different machines have different structures, but these are the common fields: page frame number (you knew that already); present/absent (that, too! 0 causes a page fault); protection (rwx); modified (a dirty frame must be written back to disk, while an unmodified frame can simply be discarded); referenced (helps the OS decide whether or not to evict the page); caching disabled (needed if the page maps onto device registers rather than memory). One possible layout is sketched below. Figure 4-11. A typical page table entry. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
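One plausible encoding of such an entry as C bit masks; the exact positions vary by machine, so treat this layout as invented for illustration.

```c
#include <stdint.h>

#define PTE_PRESENT     (1u << 0)      /* 0 here causes a page fault      */
#define PTE_MODIFIED    (1u << 1)      /* "dirty": must be written back   */
#define PTE_REFERENCED  (1u << 2)      /* input to the eviction decision  */
#define PTE_NOCACHE     (1u << 3)      /* page maps onto device registers */
#define PTE_PROT_MASK   (7u << 4)      /* rwx bits                        */
#define PTE_FRAME(pte)  ((pte) >> 12)  /* page frame number in the top    */

/* A dirty frame must be written out; a clean one can just be dropped. */
static inline int needs_writeback(uint32_t pte)
{
    return (pte & PTE_MODIFIED) != 0;
}
```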

17 TLBs—Translation Lookaside Buffers
Accessing memory takes time - especially when the page tables needed to access memory are themselves in memory! That is a lot of computation per address. Locality of reference: most programs access a small set of pages a great many times, so take advantage of this with a form of cache. In this example: a loop spans pages 19, 20, and 21; data sit on pages 129 and 130; page 140 holds array indices; and pages 860 and 861 hold the stack. The MMU looks in the TLB first (a small piece of dedicated hardware; see the sketch below). If the entry is there, great. If not, it performs the page table lookup and replaces one of the TLB entries with the result. Figure 4-12. A TLB to speed up paging. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
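A sketch of a fully associative TLB probe; the size and fields are assumptions, and a real TLB does this comparison in parallel in hardware rather than in a loop.

```c
#include <stdint.h>

#define TLB_SIZE 8

struct tlb_entry { uint32_t vpage, frame; int valid; };
static struct tlb_entry tlb[TLB_SIZE];

/* Returns 1 and fills *frame on a hit. On a miss the MMU walks the page
 * table and overwrites one TLB entry with the result. */
int tlb_lookup(uint32_t vpage, uint32_t *frame)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpage == vpage) {
            *frame = tlb[i].frame;
            return 1;
        }
    return 0;
}
```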

18 Inverted Page Tables
Consider a 64-bit virtual address space with 4K pages (2^12): that leaves 2^52 entries in the page table. At 8 bytes per entry, that is more than 30 million gigabytes! An inverted page table instead has one entry per page frame of real memory, not one per virtual page: a 64-bit address space with 256MB of RAM and a 4K page size needs only 64K entries. Virtual-to-physical translation takes more effort, though, since the whole table may have to be searched for the virtual page. So use a TLB for quick access to heavily referenced pages; on a miss, search the inverted page table through a hash table. Make the hash table as long as the number of real page frames and the lookup should be quick. Figure 4-13. Comparison of a traditional page table with an inverted page table. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

19 Page Replacement Algorithms
Optimal replacement
Not recently used (NRU) replacement
First-in, first-out (FIFO) replacement
Second chance replacement
Clock page replacement
Least recently used (LRU) replacement
The approach is similar to caching web pages, with one difference: web pages are not modified in the cache and written back to the web server; memory pages are. Optimal replacement postpones the pain as long as possible - evict the page whose next use is farthest in the future. It is impossible to implement, but can be applied in hindsight (on a second run) to evaluate other algorithms. NRU uses the R(eferenced) and M(odified) bits to form 4 classes - !R!M, !RM, R!M, RM - and randomly removes a page from the lowest-numbered nonempty class (see the sketch below). Note that R is periodically zeroed by the system so that recently referenced pages stand out; M bits remain set, since they record whether a page must be written back. FIFO: simple, but might remove something that is still needed. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
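A sketch of the NRU classification: the class number is just 2R + M, so !R!M = 0, !RM = 1, R!M = 2, RM = 3. Picking the first page of the lowest class stands in for the random choice a real implementation would make.

```c
/* Return a victim page from the lowest-numbered nonempty NRU class. */
int nru_victim(const int r[], const int m[], int npages)
{
    int victim = -1, lowest = 4;
    for (int i = 0; i < npages; i++) {
        int class = (r[i] << 1) | m[i];   /* 0..3 */
        if (class < lowest) {
            lowest = class;
            victim = i;
        }
    }
    return victim;   /* a real OS picks randomly within that class */
}
```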

20 Second Chance Replacement
A simple modification to FIFO that inspects the R bit of the oldest page. If the oldest page has its R bit set, clear the bit and treat the page as if it had just been loaded. Second chance thus looks for an old page that has not been recently referenced. If all pages have the R bit set, they all get cleared in turn, and the oldest page eventually gets evicted anyway! Figure 4-14. Operation of second chance. (a) Pages sorted in FIFO order. (b) Page list if a page fault occurs at time 20 and A has its R bit set. The numbers above the pages are their loading times. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

21 Clock Page Replacement
Instead of shuffling list members around, rotate a pointer within a circular list (see the sketch below). The same as second chance except for the implementation. Figure 4-15. The clock page replacement algorithm. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
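A sketch of the clock hand: nothing is ever relinked; the hand just advances around a circular list, clearing R bits until it finds an unreferenced page.

```c
/* Circular list of page frames; the hand points at the oldest. */
struct frame { struct frame *next; int r_bit; int page; };

/* Advance the hand until a page with R == 0 is found; that page is the
 * victim, and the hand is left just past it (its slot takes the new page). */
int clock_evict(struct frame **hand)
{
    while ((*hand)->r_bit) {
        (*hand)->r_bit = 0;        /* give it a second chance */
        *hand = (*hand)->next;
    }
    int victim = (*hand)->page;
    *hand = (*hand)->next;
    return victim;
}
```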

22 Simulating LRU in Software (1)
Evict the page that has gone unused for the longest time. True LRU requires a linked list of all pages sorted by access time, so every memory reference means updating the list: find the page, unlink it, and put it at the front, so the tail holds the least recently used page. Another way is to store a clock-tick field in each page table entry and search the table for the lowest value. Or try this clever algorithm: initialise an n x n bit matrix to all 0s; on an access to page frame k, set all bits of row k to 1, then clear all bits of column k. The row values identify the LRU page - it always has the lowest value (sketched below). Do this in hardware? Are you kidding? Figure 4-16. LRU using a matrix when pages are referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
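The matrix trick, sketched for the 4-page example in the figure:

```c
#include <string.h>

#define N 4                       /* page frames, as in the figure */
static unsigned char m[N][N];     /* the n x n bit matrix          */

/* On a reference to frame k: row k := all 1s, then column k := all 0s. */
void reference(int k)
{
    memset(m[k], 1, N);
    for (int i = 0; i < N; i++)
        m[i][k] = 0;
}

/* The frame whose row, read as a binary number, is smallest is the LRU. */
int lru_victim(void)
{
    int best = 0;
    unsigned best_val = ~0u;
    for (int i = 0; i < N; i++) {
        unsigned val = 0;
        for (int j = 0; j < N; j++)
            val = (val << 1) | m[i][j];
        if (val < best_val) { best_val = val; best = i; }
    }
    return best;
}
```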

23 Simulating LRU in Software (2)
Give each page frame a counter initialized to 0, and periodically add the R bit to the counter. A bigger number means more frequent use; the pages with the lowest counts are Not Frequently Used (NFU). Unfortunately, NFU never forgets a reference, even one from long ago. Two modifications achieve aging: shift each counter right 1 bit before adding the R bit, and add the R bit as the leftmost bit rather than the rightmost (sketched below). How is this different from LRU? LRU has finer granularity - aging's granularity is a clock tick - and aging's memory is limited by the width of its bit field. Figure 4-17. The aging algorithm simulates LRU in software. Shown are six pages for five clock ticks. The five clock ticks are represented by (a) to (e). Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
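A sketch of one aging tick with 8-bit counters; the frame count matches the figure's six pages.

```c
#include <stdint.h>

#define NFRAMES 6
static uint8_t age[NFRAMES];     /* one aging counter per page frame */

/* At each clock tick: shift right, then add R as the LEFTMOST bit.
 * Plain NFU would instead do age[i] += r[i] and never forget. */
void age_tick(const int r[NFRAMES])
{
    for (int i = 0; i < NFRAMES; i++)
        age[i] = (uint8_t)((age[i] >> 1) | (r[i] ? 0x80 : 0x00));
}

/* The page with the smallest counter is the eviction candidate. */
int aging_victim(void)
{
    int victim = 0;
    for (int i = 1; i < NFRAMES; i++)
        if (age[i] < age[victim])
            victim = i;
    return victim;
}
```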

24 The Working Set Model Demand paging - bring pages in only as they are needed, not in advance. Locality of reference gives rise to a working set (of pages). If the working set fits in memory, you're good to go; otherwise, it's thrash-ville. One way to speed things up is to keep track of the working set and swap the whole set in and out together - prepaging, instead of demand paging. The curve levels off because a process has only so many pages. The aging algorithm can identify the working set via the high-order counter bits, and the clock algorithm can be modified to consult working set membership first, then the R bit. Figure 4-18. The working set is the set of pages used by the k most recent memory references. The function w(k, t) is the size of the working set at time t. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

25 Local versus Global Allocation Policies
How should memory be allocated among competing processes? Consider the three processes in (a). If A page faults, do we page out one of A's pages, or one of another process's? With local page replacement, A6 gets evicted; with global page replacement, B6 does. Global generally works better, because local replacement statically partitions memory per process while working sets change size over time: a partition that is too small thrashes, and one that is too big wastes page frames. Being "fair" and allocating every process the same fixed number of frames ignores the obvious question: shouldn't the number of pages per process depend on its size? Figure 4-19. Local versus global page replacement. (a) Original configuration. (b) Local page replacement. (c) Global page replacement. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

26 Page Fault Frequency
With global allocation, the page fault frequency can tell us whether a process' allocation (its number of page frames) should grow or shrink - but not which page to evict. Naturally, page faults decrease as more of a process's pages are allowed to reside in memory; the figure shows this clearly. Try to keep the page fault rate between the dashed lines: don't give the process too many frames (below B), but don't give it too few either, or it will thrash (above A). A control sketch follows. At some point you have to swap the whole process out, not just page. Figure 4-20. Page fault rate as a function of the number of page frames assigned. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
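A sketch of the control rule the dashed lines suggest; the threshold values are invented for illustration.

```c
#define PFF_UPPER 10.0   /* faults/sec above line A: about to thrash */
#define PFF_LOWER  2.0   /* faults/sec below line B: frames to spare */

/* Grow or shrink a process's frame allocation from its measured page
 * fault rate; which page to evict remains a separate question. */
int adjust_frames(double fault_rate, int frames)
{
    if (fault_rate > PFF_UPPER)
        return frames + 1;              /* give it more memory */
    if (fault_rate < PFF_LOWER && frames > 1)
        return frames - 1;              /* reclaim some frames */
    return frames;                      /* inside the band: ok */
}
```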

27 Segmentation (1) Examples of tables saved by a compiler …
The source text being saved for the printed listing (on batch systems). The symbol table, containing the names and attributes of variables. The table containing all the integer and floating-point constants used. The parse tree, containing the syntactic analysis of the program. The stack used for procedure calls within the compiler. These will vary in size dynamically during the compile process Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

28 Segmentation (2) An argument for having more than one virtual address space: if a process' memory requirements can be cleanly grouped and characterised, then multiple virtual memory spaces may work better, because the parts no longer collide with one another. Figure 4-21. In a one-dimensional address space with growing tables, one table may bump into another. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

29 Segmentation (3) Remember that a segment is a logical entity, meant to ease the programmer's burden. Each segment is an independent address space and can grow and shrink independently. An address has two parts: a segment number and an offset within that segment. Linking is simplified because all addresses within a segment effectively start at 0. Sharing segments facilitates shared libraries, and different segments can carry different forms of protection, e.g. rwx - since the programmer knows what each segment is for, bugs are easier to catch. NB: the programmer is not aware of paging, but is aware of segmentation. Figure 4-22. A segmented memory allows each table to grow or shrink independently of the other tables. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

30 Figure 4-23. Comparison of paging and segmentation.
Remember, the user is not aware of paging but is aware of logical segments . . . Figure 4-23. Comparison of paging and segmentation. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

31 Figure 4-23. Comparison of paging and segmentation.
. . . Figure 4-23. Comparison of paging and segmentation. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

32 Implementation of Pure Segmentation
Pages are fixed size; segments are not. As segments are evicted, they leave holes: checkerboarding, i.e. external fragmentation. Compaction removes the holes and creates one contiguous hole. Figure 4-24. (a)-(d) Development of checkerboarding. (e) Removal of the checkerboarding by compaction. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

33 Segmentation with Paging: The Intel Pentium (1)
The Pentium supports 16K segments, each with a 2^32 virtual address space. It supports segmentation, paging, or both, though most operating systems use pure paging with a single segment (only OS/2 used the full capabilities). LDT: Local Descriptor Table - each program has its own. GDT: Global Descriptor Table - shared by all programs. A selector holds an index into the respective table, which yields the segment descriptor. The index is 13 bits, so how many descriptors fit in each table? 8K (2^13). The selector sits in a code (CS) or data (DS) segment register, and the descriptor fetched from the GDT or LDT is placed in an internal register. To locate the descriptor, the selector is copied, its 3 low-order bits are zeroed, and the result is added to the table base, giving direct access to the entry: selector 72 is entry 9 in the GDT, at address GDT + 72 (decoded in the sketch below). Figure 4-25. A Pentium selector. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
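Decoding the selector fields in C: the 13-bit index, table-indicator bit, and 2-bit privilege level are the real x86 layout, and zeroing the low 3 bits yields the byte offset of the 8-byte descriptor (hence 72 = entry 9 x 8).

```c
#include <stdint.h>

unsigned desc_index(uint16_t sel) { return sel >> 3; }       /* 13 bits */
unsigned uses_ldt(uint16_t sel)   { return (sel >> 2) & 1; } /* 0 = GDT */
unsigned privilege(uint16_t sel)  { return sel & 3; }        /* RPL     */

/* Byte offset of the descriptor within its table: index * 8, which is
 * simply the selector with its 3 low-order bits zeroed, as above. */
unsigned desc_offset(uint16_t sel) { return sel & ~7u; }
```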

34 Segmentation with Paging: The Intel Pentium (2)
All this complication exists to handle pure paging, pure segmentation, and paged segments - plus backward compatibility with the 286. Figure 4-26. Pentium code segment descriptor. Data segments differ slightly. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

35 Segmentation with Paging: The Intel Pentium (3)
Figure 4-27. Conversion of a (selector, offset) pair to a linear address. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

36 Segmentation with Paging: The Intel Pentium (4)
Figure 4-28. Mapping of a linear address onto a physical address. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

37 Segmentation with Paging: The Intel Pentium (6)
Access to a higher level is permitted; access to a lower level is illegal and causes a trap. Calling procedures at other levels (higher or lower) is allowed, but controlled so that only official entry points can be used (the selector gives the specific address of the procedure call). Figure 4-29. Protection on the Pentium. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

38 Memory Layout (1)
MINIX doesn't page or swap! Why? 1. to keep the system easy to understand; 2. the Intel 8088 had no MMU; 3. to make MINIX easy to port. Memory management is handled by the process manager (PM), which runs in user space. Note that the space of the newly forked image is released before the exec'ed image's space is allocated, to avoid creating extra holes. I&D = Instruction and Data. Discuss shared vs. combined I&D space: text (I) can be shared, which complicates fork and termination a bit. Figure 4-30. Memory allocation (a) Originally. (b) After a fork. (c) After the child does an exec. The shaded regions are unused memory. The process is a common I&D one. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

39 Memory Layout (2) Figure 4-31. (a) A program as stored in a disk file. (b) Internal memory layout for a single process. In both parts of the figure the lowest disk or memory address is at the bottom and the highest address is at the top. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

40 Process Manager Data Structures and Algorithms (1)
. . . Figure 4-32. The message types, input parameters, and reply values used for communicating with the PM. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

41 Process Manager Data Structures and Algorithms (2)
. . . Figure 4-32. The message types, input parameters, and reply values used for communicating with the PM. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

42 Processes in Memory (1) Figure 4-33. (a) A process in memory. (b) Its memory representation for combined I and D space. (c) Its memory representation for separate I and D space. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

43 Processes in Memory (2) Figure 4-34. (a) The memory map of a separate I and D space process, as in the previous figure. (b) The layout in memory after a second process starts, executing the same program image with shared text. (c) The memory map of the second process. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

44 Figure 4-35. The hole list is an array of struct hole.
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
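A hedged paraphrase of the structure in Figure 4-35; the field names follow the book's convention, but the real definition lives in the MINIX source (servers/pm/alloc.c) and should be checked there.

```c
#define NR_HOLES 512               /* illustrative table size */

typedef unsigned int phys_clicks;  /* memory is measured in clicks */

struct hole {
    struct hole *h_next;           /* next hole on the chained list */
    phys_clicks h_base;            /* where the hole begins         */
    phys_clicks h_len;             /* how big it is                 */
};

static struct hole hole[NR_HOLES]; /* the hole list is an array     */
```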

45 Figure 4-36. The steps required to carry out the fork system call.
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

46 Figure 4-37. The steps required to carry out the exec system call.
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

47 EXEC System Call (2) Figure 4-38. (a) The arrays passed to execve. (b) The stack built by execve. (c) The stack after relocation by the PM. (d) The stack as it appears to main at the start of execution. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

48 EXEC System Call (3)
Figure 4-39. The key part of crtso, the C run-time, start-off routine. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

49 Figure 4-40. Three phases of dealing with signals.
Signal Handling (1) Figure 4-40. Three phases of dealing with signals. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

50 Figure 4-41. The sigaction structure.
Signal Handling (2) Figure 4-41. The sigaction structure. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

51 Signal Handling (3) Figure 4-42. Signals defined by POSIX and MINIX 3. Signals indicated by (*) depend on hardware support. Signals marked (M) are not defined by POSIX but are defined by MINIX 3 for compatibility with older programs. Signals marked "kernel" are MINIX 3-specific signals generated by the kernel, used to inform system processes about system events. Several obsolete names and synonyms are not listed here. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

52 Signal Handling (4) Figure 4-42. Signals defined by POSIX and MINIX 3 (continued). Signals indicated by (*) depend on hardware support. Signals marked (M) are not defined by POSIX but are defined by MINIX 3 for compatibility with older programs. Signals marked "kernel" are MINIX 3-specific signals generated by the kernel, used to inform system processes about system events. Several obsolete names and synonyms are not listed here. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

53 Signal Handling (5) Figure 4-43. A process' stack (above) and its stackframe in the process table (below), corresponding to phases in handling a signal. (a) State as the process is taken out of execution. (b) State as the handler begins execution. (c) State while sigreturn is executing. (d) State after sigreturn completes execution. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

54 Initialization of Process Manager
Figure 4-44. Boot monitor display of memory usage of the first few system image components. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

55 Implementation of EXIT
Figure 4-45. (a) The situation as process 12 is about to exit. (b) The situation after it has exited. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

56 Implementation of EXEC
Figure 4-46. (a) The arrays passed to execve and the stack created when a script is executed. (b) After processing by patch_stack, the arrays and the stack look like this. The script name is passed to the program that interprets the script. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

57 Figure 4-47. System calls relating to signals.
Signal Handling (1) Figure 4-47. System calls relating to signals. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

58 Signal Handling (2) Figure 4-48. Messages for an alarm. The most important are: (1) the user calls alarm; (4) after the set time has elapsed, the signal arrives; (7) the handler terminates with a call to sigreturn. See the text for details. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

59 Signal Handling (3) Figure 4-49. The sigcontext and sigframe structures pushed on the stack to prepare for a signal handler. The processor registers are a copy of the stackframe used during a context switch. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

60 Figure 4-50. Three system calls involving time.
Other System Calls (1) Figure 4-50. Three system calls involving time. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

61 Figure 4-51. The system calls supported in servers/pm/getset.c.
Other System Calls (2) Figure 4-51. The system calls supported in servers/pm/getset.c. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

62 Other System Calls (3) Figure 4-52. Special-purpose MINIX 3 system calls in servers/pm/misc.c. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

63 Figure 4-53. Debugging commands supported by servers/pm/trace.c.
Other System Calls (4) Figure 4-53. Debugging commands supported by servers/pm/trace.c. Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved

64 Memory Management Utilities
The three entry points of alloc.c (a first-fit sketch follows):
alloc_mem – request a block of memory of a given size
free_mem – return memory that is no longer needed
mem_init – initialize the free list when PM starts running
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall, Inc. All rights reserved
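A minimal first-fit sketch of alloc_mem over the hole list from Figure 4-35; the real servers/pm/alloc.c differs in detail (hole-slot recycling, for instance), so treat this as illustrative.

```c
typedef unsigned int phys_clicks;

struct hole { struct hole *h_next; phys_clicks h_base, h_len; };

static struct hole *hole_head;     /* free list, sorted by address */

/* Allocate `clicks` of memory with first fit; returns the base of the
 * block, or (phys_clicks)-1 if no hole is big enough (cf. NO_MEM). */
phys_clicks alloc_mem(phys_clicks clicks)
{
    for (struct hole *h = hole_head; h != NULL; h = h->h_next) {
        if (h->h_len >= clicks) {
            phys_clicks base = h->h_base;
            h->h_base += clicks;   /* shave the block off the bottom */
            h->h_len  -= clicks;
            /* a hole shrunk to zero length would be unlinked here */
            return base;
        }
    }
    return (phys_clicks)-1;
}
```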

