
1 Operating Systems Midterm Preparation Summary DEEDS Group Important Note: These slides cover only a selected set of topics for midterm review. The exam coverage is all the slides/topics covered in the class lectures, exercises and labs.

2 Processes Process: program in execution  PC  Registers  Process stack (function params, return addresses, local variables)  Heap (may be included) Process  active entity, with a PC specifying the next instruction to execute and a set of associated resources. Program  passive entity: a file containing a list of instructions.

3 Process States new: process is being created. running: instructions are being executed. waiting: waiting for an event (I/O completion or reception of a signal). ready: waiting to be assigned to a processor. terminated: finished execution.

4 Process Control Block (PCB) Process state Program counter CPU registers CPU scheduling information Memory management information Accounting information I/O Status Information
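The PCB fields listed above can be sketched as a C struct; the field names, types, and sizes here are illustrative assumptions, not taken from any real kernel:

```c
#include <stddef.h>

/* Hypothetical PCB sketch -- fields mirror the slide's list. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;            /* process identifier */
    enum proc_state state;          /* process state */
    void           *pc;             /* saved program counter */
    unsigned long   regs[16];       /* saved CPU registers */
    int             priority;       /* CPU scheduling information */
    void           *page_table;     /* memory-management information */
    unsigned long   cpu_time_used;  /* accounting information */
    int             open_fds[16];   /* I/O status information */
};
```

On a context switch the kernel saves the running process's registers and PC into its PCB and restores them from the next process's PCB.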

5 Process Scheduling Objective of multiprogramming: some process running at all times Objective of time sharing: switch the CPU among processes To meet these objectives, the process scheduler selects an available process for program execution on the CPU. Scheduling queues:  Ready queue  Device queue Selection of a process => Scheduler  Long-term  Short-term

6 Context Switch When interrupt occurs, the system needs to save the current context of the process currently running on the CPU “Context”  Value of the CPU registers  Process state  Memory management information Context switch  State save of the current process  State restore of a different process

7 Process Creation A process may create several new processes via a create-process system call  Creating process: parent process  New process: children Process identifier (pid) When the process is created, initialization data may be passed along by the parent process to the child process.

8 Process Creation & Termination Two possibilities exist in terms of creation  Parent executes concurrently with its children  Parent waits until some or all of its children have terminated Two possibilities exist in terms of the address space of the new process  Child process is a duplicate of the parent process (same program and data as parent)  Child process has a new program loaded into it Process Termination  All the resources of the process (physical and virtual memory, open files, and I/O buffers) are deallocated by the operating system.
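A minimal POSIX sketch of the parent-waits pattern described above: the parent creates a child with `fork()` and blocks in `waitpid()` until the child terminates. The helper name and exit code are illustrative:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Spawn a child that exits with `code`; the parent waits and returns
   the child's exit status (-1 on failure). */
int spawn_and_collect(int code) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                /* fork failed */
    if (pid == 0)
        _exit(code);              /* child: terminate immediately */
    int status;
    waitpid(pid, &status, 0);     /* parent waits for this child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The alternative (parent executes concurrently) is the same code without the `waitpid()` call; the kernel then reparents or reaps the child later.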

9 Interprocess Communication Cooperating or independent processes Cooperating processes require an interprocess communication (IPC) mechanism Two fundamental models of interprocess communication  Shared memory  Message passing

10 Interprocess Communication (figure: message passing vs. shared memory)

11 Threads Threads: Basic unit of CPU utilization  Thread id  PC  Register set  Stack Shares  Code section  Data section  Other OS resources

12 Threads Benefits of Multithreaded programming  Responsiveness Allows a program to continue even if parts of it are blocked Ex: Tabs in Firefox, Opera, Text/Image Web server streams etc.  Resource Sharing (but also less protection!) Threads share memory and process resources Allows an application to perform several different activities within the same address space  Efficiency/Performance More economical to context-switch threads than processes Solaris: 30-100 times faster thread creation vs. process creation; context switch 5 times faster for threads vs. processes  Utilization of multiprocessor architectures Worker threads dispatching to different processors

13 Multithreading Models Support for threads  User threads Supported above the kernel Managed without kernel support  Kernel threads Supported and managed by OS Three common ways of establishing a relationship between user and kernel threads  Many-to-one model  One-to-one model  Many-to-many model

14 1. Many-to-one Model Maps many user-level threads to one kernel thread. Thread management is done in user space => + Entire process will block if a thread makes a blocking system call => - Multiple threads are unable to run in parallel on multiprocessors (one thread can access the kernel at a time) Examples:  Green threads in Solaris  GNU portable threads

15 2. One-to-one model Maps each user thread to a kernel thread Provides more concurrency than the many-to-one model Allows multiple threads to run in parallel on multiprocessors Drawback: Creating a user thread requires creating the corresponding kernel thread Examples:  Windows  Linux  Solaris 9 and newer

16 3. Many-to-many Model Multiplexes many user-level threads to a smaller or equal number of kernel threads. Less concurrency than one-to-one, but an easier scheduler and different numbers of kernel threads for different server types Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor When a user thread blocks, the kernel schedules another for execution Examples:  Solaris prior to v9  Windows NT family with the ThreadFiber package

17 4. Two-level Model A popular variation on the many-to-many model that also allows a user thread to be bound to a kernel thread Examples:  HP-UX  Tru64 UNIX  Solaris 8 and earlier

18 Memory Management Where in the memory should we place our programs?  Limited amount of memory!  More than one program!  Programs have different sizes!  Program size might grow (or shrink)! Memory allocation changes as  Processes come into memory  Leave memory Swapping

19 Virtual Memory Separates  Virtual (logical) addresses  Physical addresses Requires a translation at run time  Virtual  Physical  Handled in HW (MMU)

20 Paging

21 The relation between virtual addresses and physical memory addresses is given by the page table One page table per process is needed The page table needs to be reloaded at each context switch

22 Paging Every memory lookup  Find the page in the page table  Find the (physical) memory location Now we have two memory accesses (per reference) Solution: Translation Lookaside Buffer (TLB)  (again a cache…)
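The lookup above can be sketched as splitting the virtual address into a page number and an offset, then indexing a page table. The flat array, 4 KiB page size, and function name are illustrative assumptions:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed page size: 4 KiB */

/* Split a virtual address into page number and offset, look up the
   frame number in the page table, and recombine with the offset. */
uint32_t translate(uint32_t vaddr, const uint32_t *page_table) {
    uint32_t page   = vaddr / PAGE_SIZE;  /* index into the page table */
    uint32_t offset = vaddr % PAGE_SIZE;  /* unchanged by translation */
    uint32_t frame  = page_table[page];   /* physical frame number */
    return frame * PAGE_SIZE + offset;
}
```

The TLB caches recent (page, frame) pairs so that this table lookup, which itself costs a memory access, can usually be skipped.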

23 TLBs – Translation Lookaside Buffers

24 Memory lookup Look for the page in the TLB (fast)  If hit, fine, go ahead!  If miss, find the page in the page table (hit) and put it in the TLB  If the page is not in the page table (miss), reload the page from disk What if the physical memory is full?

25 Page Replacement Algorithms Optimal Page Replacement Algorithm  Replace the page needed at the farthest point in the future Optimal but unrealizable Not Recently Used  Each page has a Reference bit and a Modified bit Bits are set by HW when the page is referenced or modified The Reference bit is periodically unset (at clock ticks)  Pages are classified Class 0: not referenced, not modified Class 1: not referenced, modified Class 2: referenced, not modified Class 3: referenced, modified  NRU removes a page at random from the lowest-numbered non-empty class
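The NRU class numbering amounts to `2*R + M`; a one-line sketch (function name is illustrative):

```c
/* NRU class number from the Reference and Modified bits:
   class = 2*R + M, giving classes 0..3 as on the slide. */
int nru_class(int referenced, int modified) {
    return 2 * (referenced ? 1 : 0) + (modified ? 1 : 0);
}
```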

26 Page Replacement Algorithms FIFO Page Replacement Algorithm  Maintain a linked list of all pages In the order they came into memory  Page at beginning of list replaced  Disadvantage Page in memory the longest may be used often Second Chance Page Replacement Algorithm  Pages sorted in FIFO order  Inspect the R bit, give the page a second chance if R=1

27 Page Replacement Algorithms The Clock Page Replacement Algorithm
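A clock-style sketch of second-chance victim selection: pages sit in a circular list, and the hand clears R bits until it finds a page with R=0. Resetting the hand to 0 on every call is a simplification; names are illustrative:

```c
/* Clock / second-chance victim selection. r[i] is page i's R bit.
   Pages with R=1 get a second chance (R cleared, hand moves on);
   the first page found with R=0 is the victim. Always terminates:
   after one full sweep every R bit is 0. */
int second_chance_victim(int *r, int n) {
    for (int i = 0; ; i = (i + 1) % n) {
        if (r[i] == 0)
            return i;   /* old and not recently referenced: evict */
        r[i] = 0;       /* second chance: clear R, advance the hand */
    }
}
```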

28 Page Replacement Algorithms Least Recently Used (LRU)  Locality: pages used recently will be used soon Throw out the page that has been unused longest  Keep a linked list of pages or a counter Not Frequently Used (NFU) - Simulating LRU in Software  A counter is associated with each page  At each clock interrupt add R to the counter

29 Small Modification to NFU: Aging 1) The counters are each shifted right 1 bit before the R bit is added in 2) R bit is added to the leftmost rather than the rightmost bit.
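The two steps above can be sketched for an 8-bit counter (the width and names are illustrative):

```c
#include <stdint.h>

/* One aging step for a page's software counter, per the slide:
   1) shift the counter right one bit,
   2) add the R bit into the leftmost (not rightmost) bit. */
uint8_t age_counter(uint8_t counter, int r_bit) {
    counter >>= 1;            /* step 1: shift right */
    if (r_bit)
        counter |= 0x80;      /* step 2: R into the leftmost bit */
    return counter;
}
```

Recently referenced pages thus keep high counter values, and the page with the lowest counter approximates the least recently used one.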

30 Segmentation In a one-dimensional address space with growing tables, one table may bump into another

31 Segmentation


33 I/O Goals for I/O Handling  Enable use of peripheral devices  Present a uniform interface for Users (files etc.) Devices (respective drivers)  Hide the details of devices from users (and OS)

34 I/O Most device controllers provide  buffers (in / out)  control registers  status registers These are accessed from the OS/Apps  I/O ports  memory-mapped  hybrid

35 Direct Memory Access (DMA)

36 I/O Handling Three kinds of I/O handling  Programmed I/O  Interrupt-driven I/O  DMA-based I/O

37 Programmed I/O

38 Interrupt-driven I/O (figure: code for system call | code for interrupt handler)

39 I/O Using DMA Printing a string using DMA a) code executed when the print system call is made b) interrupt service procedure

40 Deadlock A set of processes, each holding a resource and waiting to acquire a resource held by another. Deadlock  none of the processes can run, release resources, or be awakened (figure: P1 and P2 each hold one of resources A and B and need the other)

41 Deadlock Modeling  process A is holding resource R  process B is waiting for (requesting) resource S  processes C and D are in deadlock over resources T and U: C has U, wants T; D has T, wants U

42 Deadlock Detection 1. Detection with One Resource of Each Type Develop a resource ownership and requests graph If a cycle can be found within the graph  deadlock

43 Deadlock Detection 2. Detection with Multiple Resource of Each Type


45 Deadlock Avoidance Safe and Unsafe States (figure: states a–e) Safe: if there is a scheduling order that satisfies all processes even if they request their maximum resources at the same time * Keep in mind that only 1 process can execute at a given time! Available resources = 10

46 Safe and Unsafe States Note: This is not a deadlock – just that the “potential” for a deadlock exists IF A or C ask for the max.

47 Banker's (State) Algorithm for a Single Resource

48 Banker’s Algorithm for Multiple Resources
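A sketch of the safety check at the core of Banker's algorithm for multiple resources, under assumed small fixed dimensions; the matrix layout follows the usual Available/Allocation/Need formulation:

```c
#include <stdbool.h>

#define NPROC 5
#define NRES  3

/* Banker's safety check: repeatedly find a process whose remaining
   need fits in the work vector; assume it runs to completion and
   releases its allocation. The state is safe iff all can finish. */
bool is_safe(const int avail[NRES],
             const int alloc[NPROC][NRES],
             const int need[NPROC][NRES]) {
    int work[NRES];
    bool done[NPROC] = { false };
    for (int r = 0; r < NRES; r++)
        work[r] = avail[r];
    for (int finished = 0; finished < NPROC; ) {
        bool progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            bool fits = true;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { fits = false; break; }
            if (fits) {
                for (int r = 0; r < NRES; r++)
                    work[r] += alloc[p][r];  /* p finishes, releases */
                done[p] = true;
                progress = true;
                finished++;
            }
        }
        if (!progress)
            return false;   /* no process can finish: unsafe */
    }
    return true;
}
```

A resource request is granted only if the state that would result still passes this check.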

49 Scheduling Task/Process/Thread Types  Non-preemptive (NP): an ongoing task cannot be displaced  Preemptive: ongoing tasks can be switched in/out as needed Scheduling Algorithms  First Come, First Served (FCFS)  Shortest Job First (SJF)  Round Robin (RR)  Priority Based (PB)

50 First-Come, First-Served (FCFS) Scheduling (Non-Preemptive) Process / CPU burst time: P1 = 24, P2 = 3, P3 = 3 Processes arrive (and get executed) in their arrival order: P1, P2, P3 Gantt chart: P1 [0–24], P2 [24–27], P3 [27–30] Waiting time for P1 = 0; P2 = 24; P3 = 27 Average waiting time: (0 + 24 + 27)/3 = 17
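The slide-50 arithmetic generalizes: under FCFS each process waits for the sum of the bursts ahead of it. A small sketch (function name is illustrative):

```c
/* FCFS average waiting time: process i waits for the total burst
   time of the i processes that arrived before it. */
double fcfs_avg_wait(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* wait = sum of earlier bursts */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}
```

With the slide's bursts {24, 3, 3} this reproduces the average of 17, and reordering the same bursts shortest-first shows why SJF reduces waiting time.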

51 Shortest-Job-First (SJF) Scheduling (Non-preemptive) Process / Arrival Time / Burst Time: P1 0.0 7; P2 2.0 4; P3 4.0 1; P4 5.0 4 SJF (non-preemptive) order: P1, then P3, then P2, then P4 Gantt chart: P1 [0–7], P3 [7–8], P2 [8–12], P4 [12–16] Average waiting time = (0 + [8-2] + [7-4] + [12-5])/4 = 4.00

52 Shortest-Job-First (SJF) Scheduling (Preemptive) Process / Arrival Time / Burst Time: P1 0.0 7; P2 2.0 4; P3 4.0 1; P4 5.0 4 Gantt chart: P1 [0–2], P2 [2–4], P3 [4–5], P2 [5–7], P4 [7–11], P1 [11–16] At t=5 the remaining times are P2: 2, P4: 4, P1: 5 Waiting times: P1 = 11-2 = 9; P2 = 5-4 = 1; P3 = 0; P4 = 7-5 = 2 Average waiting time = (9 + 1 + 0 + 2)/4 = 3 [6.75; 4.00]

53 Round Robin Each process gets a fixed slice of CPU time (time quantum q), typically 10-100 ms  After this time has elapsed, the process is “forcefully” preempted and added to the end of the ready queue.  If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units. Performance?  q too large  FIFO (poor response time & poor CPU utilization)  q too small  context-switch overhead dominates; q must be large with respect to the context-switch time, else switching overhead reduces efficiency

54 Dynamic RR with Quantum = 20 Process / Burst Time: P1 53; P2 17; P3 68; P4 24 Gantt chart: P1 [0–20], P2 [20–37], P3 [37–57], P4 [57–77], P1 [77–97], P3 [97–117], P4 [117–121], P1 [121–134], P3 [134–154], P3 [154–162] Typically, higher average turnaround than SJF, but better response  Turnaround time – amount of time to execute a particular process (minimize)  Response time – amount of time from request submission until first response (minimize)
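The slide-54 schedule can be reproduced by a small simulation, assuming all processes arrive at t = 0 (the function name and the 64-process cap are illustrative):

```c
/* Round-robin simulation, all arrivals at t = 0: returns the average
   waiting time, where waiting = completion time - burst time.
   Assumes n <= 64. */
double rr_avg_wait(const int burst[], int n, int q) {
    int remaining[64], total_wait = 0, t = 0, left = n;
    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];
    while (left > 0) {
        for (int i = 0; i < n; i++) {     /* one pass of the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                total_wait += t - burst[i];  /* completion - burst */
                left--;
            }
        }
    }
    return (double)total_wait / n;
}
```

For the slide's bursts {53, 17, 68, 24} with q = 20 the completions are 134, 37, 162, 121, matching the Gantt chart.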

55 Priority Scheduling A priority number (integer) is associated with each process The CPU is allocated to process with the highest priority  Preemptive (priority and order changes possible at runtime)  Non-preemptive (initial execution order chosen by static priorities) Problem  Starvation: low priority processes may never execute Solution  Aging: as time progresses increase priority of process

56 Race Condition Shared memory, shared files, shared address spaces: how do we prevent more than one process from accessing shared data at the same time, and provide ordered access? When the outcome of concurrent access to shared data (the critical section, CS) by more than one process depends on the order of accesses as seen by the resource rather than on the proper request order  Race Condition

57 Mutual Exclusion ME Solution Basis  Exclusive CS Access

58 ME: Lock Variables  flag (lock) as a global variable guarding access to the shared section  lock = 0: resource_free  lock = 1: resource_in_use  Check lock; if free (0), set lock to 1 and then access CS The race:  A reads lock (0); initiates set_to_1  B comes in before lock(1) finishes; sees lock(0), sets lock(1)  Both A and B have access rights  race condition  Happens because “locking” (setting the global variable) is not an atomic action “Atomic”: all sub-actions finish for the action to finish, or nothing happens (all or nothing) Flow: unlocked?  acquire lock  enter CS  done  release lock  do non-CS
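The race above exists because check-then-set is two separate steps. One standard fix (not shown on the slides) is a hardware-backed atomic test-and-set, available in C11 as `atomic_flag`:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Test-and-set lock: reading the old value and setting the flag to 1
   happen as one atomic action, so two processes can never both see
   "free" and both enter the CS. */
atomic_flag lock = ATOMIC_FLAG_INIT;

bool try_acquire(void) {
    /* test_and_set returns the previous value:
       false means the lock was free and is now ours */
    return !atomic_flag_test_and_set(&lock);
}

void release(void) {
    atomic_flag_clear(&lock);
}
```

A spinlock is then `while (!try_acquire()) ;` — still busy waiting, but without the lost-update race of the plain lock variable.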

59 ME with “Busy Waiting” (strict alternation) A sees turn=0; enters CS. B sees turn=0; busy-waits (CPU waste). A exits CS, sets turn=1. B sees turn=1; enters CS. B finishes CS, sets turn=0; A enters CS, finishes CS quickly, sets turn=1. B in non-CS, A in non-CS; A finishes non-CS and wants CS again, BUT turn=1  A waits (Violates Condition 3 of ME: a process seeking the CS should not be blocked by a process not using the CS! But no race condition, given the strict alternation.)

60 Peterson's ME
int turn;         /* whose “turn” it is to enter the CS */
boolean flag[2];  /* TRUE indicates ready (wants access) to enter CS */
do {
    flag[i] = TRUE;
    turn = j;                        /* yield the turn for the next CS access */
    while (flag[j] && turn == j);    /* CS only if flag[j]==FALSE or turn==i */
    /* CS */
    flag[i] = FALSE;
    /* non-CS */
} while (TRUE);

61 Sleep and Wakeup Shared fixed-size buffer - Producer puts info IN - Consumer takes info OUT

62 Semaphores (figures: semaphores used for synchronization | for mutual exclusion)
