1 CSCI/CMPE 4334 Operating Systems Review: Final Exam

2 Review
– Chapters 1 ~ 12 in your textbook
– Lecture slides
– In-class exercises 1 ~ 10 (on the course website)
– Review slides
  – Review for 2 mid-term exams
  – Review for final exam

3 Review
5 questions (100 points) + 1 bonus question (20 points)
– Process scheduling, concurrent processing
– Memory allocation, memory management policy
– I/O device management
Question types
– Q/A

4 Time & Place & Event
1:15pm ~ 3pm, May 12 (Tuesday)
ENGR 1.272
Closed-book exam

5 Basic OS Organization
[Diagram: the Process, Thread & Resource Manager, Memory Manager, Device Manager, and File Manager sit above the hardware: Processor(s), Main Memory, and Devices]

6 Chapter 5: Device Management
Abstract all devices (and files) to a few interfaces
Device management strategies
Buffering & Spooling
Access time of disk device (see lecture slides and exercises)
– FCFS, SSTF, Scan/Look, Circular Scan/Look
– See examples in Exercise (2)

7 [Diagram: disk geometry – (a) a multi-surface disk, (b) a disk surface divided into tracks and sectors, and a cylinder, i.e. the set of tracks at the same position on every surface]

8 Several techniques are used to try to optimize the time required to service an I/O request
FCFS – first come, first served
– Requests are serviced in the order received
– No optimization
SSTF – shortest seek time first
– Service the request whose track is "closest" to the current track
– Could prevent distant requests from being serviced
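Not from the slides: a minimal Python sketch that computes the total head movement for FCFS and SSTF, assuming a made-up request queue and starting head position.

def fcfs_seek(requests, head):
    """Total head movement when requests are serviced in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

def sstf_seek(requests, head):
    """Total head movement when the closest pending request is always served next."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # hypothetical track requests
print(fcfs_seek(queue, head=53))  # 640 tracks of movement
print(sstf_seek(queue, head=53))  # 236 tracks of movement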

9 Scan/Look
– Starts at track 0, moving towards the last track, servicing requests for tracks as it passes them
– Reverses direction when it reaches the last track
– Look is a variant: it reverses direction once the last pending request in the current direction has been handled
Circular Scan/Look
– Similar to Scan/Look, but always moves in the same direction; it never reverses
– Relies on a special homing command in the device hardware to return the head to the start
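Continuing the same assumed example, a minimal sketch of the Scan (elevator) order; the direction flag and maximum track number are assumptions, not values from the slides.

def scan_seek(requests, head, max_track=199, up=True):
    """Total head movement for SCAN: sweep to one end of the disk, then reverse."""
    lower = sorted(t for t in requests if t < head)
    upper = sorted(t for t in requests if t >= head)
    # Sweeping up: serve all higher tracks, reach the last track, then come back down.
    order = (upper + [max_track] + lower[::-1]) if up else (lower[::-1] + [0] + upper)
    total = 0
    for track in order:
        total += abs(track - head)
        head = track
    return total

print(scan_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))  # 331 tracks of movement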

10 Chapter 6: Implementing Processes, Threads, and Resources
Modern process
– Concurrent processing
– Fully utilize the CPU and I/O devices
Address space
Context switching
State diagram

11 Process Manager
[Diagram: a program becomes a process within an abstract computing environment; the process manager provides process descriptions, the scheduler, synchronization, deadlock handling, and protection, and cooperates with the file, memory, and device managers and with resource managers over the CPU, memory, devices, and other hardware]

12 Chapter 7: Scheduling
Scheduling methods (see lecture slides and exercises)
– First-come, first-served (FCFS)
– Shortest job first (SJF), also called shortest job next (SJN)
– Higher-priority jobs first
– Job with the closest deadline first

13 Process Model and Metrics
P is a set of processes p_0, p_1, ..., p_(n-1)
S(p_i) is the state of p_i: {running, ready, blocked}
τ(p_i), the service time
– The amount of time p_i needs to be in the running state before it is completed
W(p_i), the waiting time
– The time p_i spends in the ready state before its first transition to the running state
T_TRnd(p_i), the turnaround time
– The amount of time between the moment p_i first enters the ready state and the moment the process exits the running state for the last time
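To connect these metrics with the Chapter 7 scheduling methods, here is a minimal Python sketch (my own illustration, with made-up service times) that computes W(p_i) and T_TRnd(p_i) for an FCFS schedule in which all processes enter the ready state at time 0.

def fcfs_metrics(service_times):
    """Waiting time W(p_i) and turnaround time T_TRnd(p_i) under FCFS,
    assuming every process enters the ready state at time 0."""
    metrics = []
    clock = 0
    for tau in service_times:
        waiting = clock            # time spent ready before first running
        clock += tau               # process runs to completion
        turnaround = clock         # time from arrival (0) to completion
        metrics.append((waiting, turnaround))
    return metrics

# Hypothetical service times tau(p_0) .. tau(p_3)
for i, (w, t) in enumerate(fcfs_metrics([350, 125, 475, 250])):
    print(f"p_{i}: W = {w}, T_TRnd = {t}")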

14 Chapter 8: Basic Synchronization
Critical sections
– Ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section; this is called mutual exclusion
Requirements for critical-section solutions
– Mutual Exclusion
– Progress
– Bounded Waiting

15 Chapter 8: Basic Synchronization (cont'd)
Semaphore and its P() and V() functions
– Definition
– Usage

16 Structure of process P_i
repeat
    entry section
    critical section
    exit section
    remainder section
until false;

17 Mutual Exclusion
– If process P_i is executing in its critical section, then no other process can be executing in its critical section.
Progress
– If no process is executing in its critical section and there exist processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
Bounded Waiting
– A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.


19 A semaphore, s, is a nonnegative integer variable that can only be changed or tested by these two indivisible (atomic) functions:
P(s): [while (s == 0) {wait}; s = s - 1]
V(s): [s = s + 1]

20
semaphore mutex = 1;

proc_0() {                    proc_1() {
  while (TRUE) {                while (TRUE) {
    <compute section>;            <compute section>;
    P(mutex);                     P(mutex);
    <critical section>;           <critical section>;
    V(mutex);                     V(mutex);
  }                             }
}                             }

fork(proc_0, 0);
fork(proc_1, 0);

21
semaphore mutex = 1;

proc_0() {                    proc_1() {
  ...                           ...
  /* Enter the CS */            /* Enter the CS */
  P(mutex);                     P(mutex);
  balance += amount;            balance -= amount;
  V(mutex);                     V(mutex);
  ...                           ...
}                             }

fork(proc_0, 0);
fork(proc_1, 0);

22 Chapter 9: High-Level Synchronization
Simultaneous semaphore
– P_simultaneous(S_1, ..., S_n)
Event
– wait()
– signal()
Monitor
– What is a condition? Its usage?
– What are the 3 functions of a condition?
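As an illustration of the three condition operations a monitor typically provides (wait, signal, broadcast), here is a minimal Python sketch that uses threading.Condition as a stand-in; the bounded-buffer scenario and all names are my own, not from the slides.

import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: the lock plus condition emulate a monitor."""
    def __init__(self, capacity=4):
        self.items = []
        self.capacity = capacity
        self.cond = threading.Condition()   # lock + condition queue

    def put(self, item):
        with self.cond:                      # enter the monitor
            while len(self.items) >= self.capacity:
                self.cond.wait()             # wait(): block until signalled
            self.items.append(item)
            self.cond.notify()               # signal(): wake one waiting thread

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()
            item = self.items.pop(0)
            self.cond.notify_all()           # broadcast(): wake every waiting thread
            return item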

23 Chapter 10: Deadlock
4 necessary conditions of deadlock
– Mutual exclusion
– Hold and wait
– No preemption
– Circular wait
How to deal with deadlock in an OS
– Prevention
– Avoidance
– Recovery

24 Chapter 11: Memory Management
Functions of the memory manager
– Abstraction
– Allocation
– Address translation
– Virtual memory
Memory allocation (see lecture slides and exercises)
– Fixed partition method
– First-fit
– Next-fit
– Best-fit
– Worst-fit
– Variable partition method

25 Chapter 11: Memory Management (cont'd)
Dynamic address space binding
– Static binding
– Dynamic binding

26 Primary & Secondary Memory
[Diagram: the CPU sits between the two levels of memory]
Primary memory (executable memory), e.g. RAM
– The CPU can load/store; the control unit executes code from this memory
– Transient storage
Secondary memory, e.g. disk or tape
– Accessed using I/O operations
– Persistent storage
Information can be loaded from secondary into primary memory statically or dynamically

27 Functions of the Memory Manager
– Allocate primary memory space to processes
– Map the process address space into the allocated portion of primary memory
– Minimize access times using a cost-effective amount of primary memory
– May use static or dynamic techniques

28 Memory Manager
Only a small number of interface functions are provided – usually calls to:
– Request/release primary memory space
– Load programs
– Share blocks of memory
Provides the following:
– Memory abstraction
– Allocation/deallocation of memory
– Memory isolation
– Memory sharing

29 Memory Abstraction
Process address space
– Allows a process to use an abstract set of addresses to reference physical primary memory
[Diagram: the process address space maps onto hardware primary memory; some addresses may be mapped to objects other than memory]

30 Memory layout / organization
– How to divide the memory into blocks for allocation?
– Fixed partition method: divide the memory once, before any bytes are allocated.
– Variable partition method: divide it up as you are allocating the memory.
Memory allocation
– Select which piece of memory to allocate to a request
Memory organization and memory allocation are closely related

31 [Diagram: fixed-partition memory – the operating system occupies one area and the rest of primary memory is divided into regions 0 through 3 of sizes N_0 .. N_3; a process p_i that needs n_i units is placed into one of the regions]

32 How to satisfy a request of size n from a list of free blocks
– First-fit: allocate the first hole that is big enough
– Next-fit: starting from the previous allocation, choose the next block that is large enough
– Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size. Produces the smallest leftover hole.
– Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
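A minimal Python sketch of these placement policies over an assumed free-block list (the hole sizes are made up); each function returns the index of the chosen hole, or None if the request cannot be satisfied.

def first_fit(holes, n):
    """Index of the first hole large enough for a request of size n."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole that still fits the request."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Index of the largest hole (leaves the largest leftover)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # hypothetical free-block sizes
print(first_fit(holes, 212))        # 1 -> the 500-unit hole
print(best_fit(holes, 212))         # 3 -> the 300-unit hole
print(worst_fit(holes, 212))        # 4 -> the 600-unit hole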

33 [Diagram: dynamic relocation – the CPU issues the relative address 0x02010 (e.g. load R1, 0x02010); the relocation register 0x10000 is added to it, and the MAR receives the physical address 0x12010]
Program loaded at 0x10000 → relocation register = 0x10000
Program loaded at 0x04000 → relocation register = 0x04000
We never have to change the load-module addresses! Relocation is performed automagically by the processor.

34 [Diagram: swapping – the image for p_i is swapped out of primary memory and the image for p_j is swapped in]

35 A characteristic of programs that is very important to the strategy used by virtual memory systems is spatial reference locality
– Refers to the implicit partitioning of code and data segments due to the functioning of the program (a portion for initializing data, another for reading input, others for computation, etc.)
– Can be used to select which parts of the process should be loaded into primary memory

36 Chapter 12: Virtual Memory
Address translation between virtual memory and physical memory
Terms
– Page
– Page frame
Paging algorithms
– Static allocation
– Dynamic allocation

37 Chapter 12: Virtual Memory (cont'd)
Fetch policy – when a page should be loaded
– On-demand paging
Replacement policy (see lecture slides and exercises) – which page is unloaded
– Policies: Random, Belady's optimal algorithm*, LRU*, LFU, FIFO, LRU approximation
Placement policy – where a page should be loaded

38 Chapter 12: Virtual Memory (cont'd)
Thrashing
Load control

39 [Diagram: the memory image for p_i resides in secondary memory and is mapped into primary memory]

40 Since the binding changes with time, use a dynamic virtual address map (indexed by time t)
[Diagram: the time-indexed map relates the virtual address space to primary memory]

41 [Diagram: demand paging across the virtual address space (supervisor space and user space), primary memory, and the paging disk (secondary memory)]
1. Reference to address k in page i (user space)
2. Look up (page i, addr k)
3. Translate (page i, addr k) to (page frame j, addr k)
4. Reference (page frame j, addr k)

42 The virtual memory system transfers "blocks" of the address space to/from primary memory
– Fixed-size blocks: system-defined pages are moved back and forth between primary and secondary memory
– Variable-size blocks: programmer-defined segments – corresponding to logical fragments – are the unit of movement
Paging is the commercially dominant form of virtual memory today

43 A page is a fixed-size, 2^h, block of virtual addresses
A page frame is a fixed-size, 2^h, block of physical memory (the same size as a page)
When a virtual address, x, in page i is referenced by the CPU
– If page i is loaded at page frame j, the virtual address is relocated to page frame j
– If page i is not loaded, the OS interrupts the process and loads the page into a page frame

44 [Diagram: address translation – the CPU issues a virtual address split into a page # (g bits) and a line # (h bits); the page table maps the page # to a page frame # (j bits), and the frame # concatenated with the line # forms the physical address placed in the MAR; a missing entry signals that the page is not loaded]
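A minimal Python sketch of this translation; the bit widths, page-table contents, and fault handling are assumptions made for illustration. The high g bits select the page-table entry and the low h bits pass through unchanged as the line (offset) within the frame.

H = 10                                # line (offset) bits: page size = 2^h = 1024
PAGE_TABLE = {0: 7, 1: 3, 2: None}    # assumed page # -> page frame # (None = not loaded)

def translate(virtual_addr):
    """Map a virtual address to a physical address via the page table."""
    page = virtual_addr >> H              # high g bits: page number
    line = virtual_addr & ((1 << H) - 1)  # low h bits: line within the page
    frame = PAGE_TABLE.get(page)
    if frame is None:
        raise RuntimeError(f"page fault: page {page} is not in primary memory")
    return (frame << H) | line            # frame # concatenated with line #

print(hex(translate(0x0412)))   # page 1, line 0x12 -> frame 3 -> 0xC12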

45 Two basic types of paging algorithms
– Static allocation
– Dynamic allocation
Three basic policies in defining any paging algorithm
– Fetch policy – when a page should be loaded
– Replacement policy – which page is unloaded
– Placement policy – where a page should be loaded

46 Fetch policy
– Determines when a page should be brought into primary memory
– We usually don't have prior knowledge about which pages will be needed
– The majority of paging mechanisms use a demand fetch policy
  – A page is loaded only when the process references it

47 Replacement policy
– When there is no empty page frame in memory, we need to find one to replace
  – Write it out to the swap area if it has been changed since it was read in from the swap area
  – Dirty bit or modified bit: pages that have been changed are referred to as "dirty"; these pages must be written out to disk because the disk version is out of date; this is called "cleaning" the page
– Which page should be removed from memory to make room for a new page?
  – We need a page replacement algorithm
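For the replacement policies listed on slide 37 (FIFO, LRU, etc.), here is a minimal Python sketch of FIFO replacement that counts page faults over an assumed reference string; the frame count and references are made up for the example.

from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: evict the page loaded earliest."""
    frames = deque()            # loaded pages, in load order
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1                     # page fault: the page must be loaded
            if len(frames) == num_frames:
                frames.popleft()            # evict the oldest loaded page
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # hypothetical reference string
print(fifo_faults(refs, num_frames=3))            # 10 page faults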

48 Thrashing
There are too many processes in memory and no process has enough memory to run. As a result, the page-fault rate is very high and the system spends all of its time handling page-fault interrupts.

49 Good Luck! Q/A

