1 CS 153 Design of Operating Systems Spring 2015 Lecture 11: Scheduling & Deadlock

2  Combining Algorithms
- Scheduling algorithms can be combined
  - Have multiple queues
  - Use a different algorithm for each queue
  - Move processes among queues
- Example: Multi-level feedback queues (MLFQ)
  - Multiple queues representing different job types
    - Interactive, CPU-bound, batch, system, etc.
  - Queues have priorities; jobs on the same queue are scheduled RR
  - Jobs can move among queues based upon execution history
    - Feedback: switch from interactive to CPU-bound behavior

3  Multi-level Feedback Queue (MFQ)
- Goals:
  - Responsiveness
  - Low overhead
  - Starvation freedom
  - Some tasks are high/low priority
  - Fairness (among equal-priority tasks)
- Not perfect at any of them!
  - Used in Linux (and probably Windows, MacOS)

4  MFQ
- Set of Round Robin queues
  - Each queue has a separate priority
- High priority queues have short time slices
  - Low priority queues have long time slices
- Scheduler picks the first task in the highest priority queue
  - If the time slice expires, the task drops one level
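
To make the pick-highest-queue and drop-one-level rules concrete, here is a minimal MLFQ sketch. It is illustrative only, not any real kernel's scheduler: the Task struct, the number of levels, and the slice lengths are invented for the example.

  // Minimal MLFQ sketch: NUM_LEVELS round-robin queues, where a lower index
  // means higher priority and a shorter time slice. A task that uses its
  // whole slice is demoted one level; the scheduler always serves the first
  // task of the highest-priority non-empty queue.
  #include <algorithm>
  #include <deque>
  #include <iostream>
  #include <string>
  #include <vector>

  struct Task { std::string name; };

  const int NUM_LEVELS = 3;
  const int SLICE_MS[NUM_LEVELS] = {10, 20, 40};     // short slices at high priority

  std::vector<std::deque<Task>> queues(NUM_LEVELS);  // queues[0] = highest priority

  // Pick the first task in the highest-priority non-empty queue.
  // Returns the level it came from, or -1 if everything is idle.
  int pick_next(Task &out) {
      for (int level = 0; level < NUM_LEVELS; ++level) {
          if (!queues[level].empty()) {
              out = queues[level].front();
              queues[level].pop_front();
              return level;
          }
      }
      return -1;
  }

  // Called when the task stops running: demote it if it burned its whole
  // slice, otherwise keep it at the same level (it behaved interactively).
  void requeue(const Task &t, int level, bool used_full_slice) {
      int new_level = used_full_slice ? std::min(level + 1, NUM_LEVELS - 1) : level;
      queues[new_level].push_back(t);
  }

  int main() {
      queues[0].push_back({"editor"});
      queues[0].push_back({"compiler"});
      Task t;
      int level = pick_next(t);
      std::cout << t.name << " runs for up to " << SLICE_MS[level] << " ms\n";
      requeue(t, level, /*used_full_slice=*/true);   // CPU-bound: drops one level
      return 0;
  }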

5  MFQ (diagram of the multi-level queues; no text on this slide)

6  Unix Scheduler
- The canonical Unix scheduler uses a MLFQ
  - 3-4 classes spanning ~170 priority levels
    - Timesharing: first 60 priorities
    - System: next 40 priorities
    - Real-time: next 60 priorities
    - Interrupt: next 10 (Solaris)
- Priority scheduling across queues, RR within a queue
  - The process with the highest priority always runs
  - Processes with the same priority are scheduled RR
- Processes dynamically change priority
  - Increases over time if the process blocks before the end of its quantum
  - Decreases over time if the process uses its entire quantum
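
As a rough illustration of the dynamic-priority rule above, the toy code below bumps a process's priority when it blocks early and lowers it when it burns its whole quantum. The band limits and step size are assumptions for the sketch, not the actual Unix decay formula.

  // Toy sketch of the dynamic-priority rule: blocking early raises priority,
  // using the full quantum lowers it. Real Unix schedulers use decay formulas
  // over measured CPU usage; the constants here are invented.
  #include <algorithm>
  #include <iostream>

  struct Proc {
      int priority;   // assume 0..59 is the timesharing band, higher runs first
  };

  void on_quantum_end(Proc &p, bool blocked_early) {
      if (blocked_early)
          p.priority = std::min(p.priority + 2, 59);   // reward interactive behavior
      else
          p.priority = std::max(p.priority - 2, 0);    // penalize CPU hogs
  }

  int main() {
      Proc editor{30}, cruncher{30};
      on_quantum_end(editor,   /*blocked_early=*/true);   // waited for a keystroke
      on_quantum_end(cruncher, /*blocked_early=*/false);  // used the full quantum
      std::cout << "editor=" << editor.priority
                << " cruncher=" << cruncher.priority << "\n";  // 32 vs 28
      return 0;
  }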

7  Motivation of Unix Scheduler
- The idea behind the Unix scheduler is to reward interactive processes over CPU hogs
- Interactive processes (shell, editor, etc.) typically run using short CPU bursts
  - They do not finish their quantum before waiting for more input
- Want to minimize response time
  - Time from keystroke (putting process on ready queue) to executing keystroke handler (process running)
  - Don’t want the editor to wait until a CPU hog finishes its quantum
- This policy delays execution of CPU-bound jobs
  - But that’s ok

8  Multiprocessor Scheduling
- This is its own topic; we won't go into it in detail
  - Could come back to it towards the end of the quarter
- What would happen if we used MFQ on a multiprocessor?
  - Contention for the scheduler spinlock
  - Multiple MFQs are used instead; this optimization technique is called distributed locking and is common in concurrent programming
- A couple of other considerations
  - Co-scheduling for parallel programs
  - Core affinity
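
One way to picture the "multiple MFQs / distributed locking" point: give each CPU its own run queue guarded by its own lock, so cores rarely contend on a single scheduler spinlock. The sketch below is illustrative only (the PerCpuQueue type and enqueue policy are made up); real kernels layer load balancing and affinity masks on top of this.

  // Illustrative only: one run queue (with its own lock) per CPU.
  #include <algorithm>
  #include <deque>
  #include <iostream>
  #include <mutex>
  #include <string>
  #include <thread>
  #include <vector>

  struct Task { std::string name; };

  struct PerCpuQueue {
      std::mutex lock;            // contended only by this CPU (and migrations)
      std::deque<Task> tasks;
  };

  int main() {
      unsigned ncpus = std::max(1u, std::thread::hardware_concurrency());
      std::vector<PerCpuQueue> queues(ncpus);

      // Core affinity in miniature: a task is always enqueued on "its" CPU.
      auto enqueue = [&](const Task &t, unsigned preferred_cpu) {
          PerCpuQueue &q = queues[preferred_cpu % ncpus];
          std::lock_guard<std::mutex> g(q.lock);
          q.tasks.push_back(t);
      };

      enqueue({"render"}, 0);
      enqueue({"compile"}, 1);
      std::cout << "CPUs: " << ncpus << "\n";
      return 0;
  }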

9  Deadlock!

10  Deadlock: the deadly embrace!
- With synchronization we can easily shoot ourselves in the foot
  - Incorrect use of synchronization can block all processes
  - You have likely been intuitively avoiding this situation already
- More generally, processes that allocate multiple resources generate dependencies on those resources
  - Locks, semaphores, monitors, etc., just represent the resources that they protect
  - If one process tries to access a resource that a second process holds, and vice versa, they can never make progress
- We call this situation deadlock, and we'll look at:
  - Definition and conditions necessary for deadlock
  - Representation of deadlock conditions
  - Approaches to dealing with deadlock

11  Deadlock Definition
- Resource: any (passive) thing needed by a thread to do its job (CPU, disk space, memory, lock)
  - Preemptable: can be taken away by the OS
  - Non-preemptable: must be left with the thread until it is done
- Starvation: a thread waits indefinitely
- Deadlock: circular waiting for resources
  - Deadlock => starvation, but not vice versa

  Process 1:            Process 2:
  lockA->Acquire();     lockB->Acquire();
  …                     …
  lockB->Acquire();     lockA->Acquire();

12  Deadlock Definition
- Deadlock is a problem that can arise:
  - When processes compete for access to limited resources
  - When processes are incorrectly synchronized
- Definition: deadlock exists among a set of processes if every process is waiting for an event that can be caused only by another process in the set

  Process 1:            Process 2:
  lockA->Acquire();     lockB->Acquire();
  …                     …
  lockB->Acquire();     lockA->Acquire();
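
Here is a runnable version of the two-lock example, using std::mutex and std::thread rather than the Lock class from lecture (an assumption of this sketch). With the sleeps widening the race window, the program is expected to hang, which is exactly the deadlock being illustrated.

  // Thread 1 takes A then B, thread 2 takes B then A; with unlucky timing
  // both block forever, so expect this program to hang (that is the point).
  #include <chrono>
  #include <mutex>
  #include <thread>

  std::mutex lockA, lockB;

  void thread1() {
      std::lock_guard<std::mutex> a(lockA);
      std::this_thread::sleep_for(std::chrono::milliseconds(10)); // widen the race
      std::lock_guard<std::mutex> b(lockB);   // waits for thread2 to release B
  }

  void thread2() {
      std::lock_guard<std::mutex> b(lockB);
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
      std::lock_guard<std::mutex> a(lockA);   // waits for thread1 to release A
  }

  int main() {
      std::thread t1(thread1), t2(thread2);
      t1.join();
      t2.join();   // with the sleeps above, we almost never get here
      return 0;
  }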

13  Deadlock and Resources
- There are two kinds of resources: consumable and reusable
  - Consumable resources are generated and destroyed by processes: e.g., a process waiting for a message from another process
  - Reusable resources are allocated and released by processes: e.g., locks on files
- Deadlock with consumable resources is usually treated as a correctness issue (e.g., proofs) or with timeouts
- From here on, we only consider reusable resources

14  Dining Philosophers
Each philosopher needs two chopsticks to eat. Each grabs the chopstick on the right first.
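
A sketch of the grab-the-right-chopstick-first protocol, with chopsticks modeled as mutexes (an assumption of this sketch): if all five philosophers pick up their right chopstick at the same time, each then blocks forever waiting for the left one, a circular wait.

  // Dining philosophers with the deadlock-prone "right first" protocol.
  #include <mutex>
  #include <thread>
  #include <vector>

  const int N = 5;
  std::mutex chopstick[N];

  void philosopher(int i) {
      int right = i;
      int left  = (i + 1) % N;
      chopstick[right].lock();     // everyone grabs the right one first...
      chopstick[left].lock();      // ...then blocks here if the neighbor got theirs
      // eat
      chopstick[left].unlock();
      chopstick[right].unlock();
  }

  int main() {
      std::vector<std::thread> diners;
      for (int i = 0; i < N; ++i)
          diners.emplace_back(philosopher, i);
      for (auto &t : diners)
          t.join();                // may never return if all grabbed simultaneously
      return 0;
  }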

15  Yet another Example (diagram only; no text on this slide)

16  Conditions for Deadlock
- Deadlock can exist if and only if the following four conditions hold simultaneously:
  1. Mutual exclusion: at least one resource must be held in a non-sharable mode
  2. Hold and wait: there must be one process holding one resource and waiting for another resource
  3. No preemption: resources cannot be preempted (critical sections cannot be aborted externally)
  4. Circular wait: there must exist a set of processes [P1, P2, P3, …, Pn] such that P1 is waiting for P2, P2 for P3, and so on, with Pn waiting for P1 to close the cycle
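
Circular wait can be checked mechanically by building a wait-for graph (an edge p -> q means process p is waiting for a resource that process q holds) and looking for a cycle. The sketch below is a generic DFS cycle check, not tied to any particular OS; the process IDs are made up.

  // Minimal wait-for-graph cycle check: a cycle is exactly "circular wait".
  #include <iostream>
  #include <map>
  #include <set>
  #include <vector>

  using Graph = std::map<int, std::vector<int>>;

  bool has_cycle_from(const Graph &g, int node,
                      std::set<int> &visiting, std::set<int> &done) {
      if (done.count(node)) return false;
      if (visiting.count(node)) return true;      // found a back edge: cycle
      visiting.insert(node);
      auto it = g.find(node);
      if (it != g.end())
          for (int next : it->second)
              if (has_cycle_from(g, next, visiting, done)) return true;
      visiting.erase(node);
      done.insert(node);
      return false;
  }

  bool deadlocked(const Graph &g) {
      std::set<int> visiting, done;
      for (const auto &entry : g)
          if (has_cycle_from(g, entry.first, visiting, done)) return true;
      return false;
  }

  int main() {
      Graph waits_for = {{1, {2}}, {2, {3}}, {3, {1}}};   // P1 -> P2 -> P3 -> P1
      std::cout << (deadlocked(waits_for) ? "deadlock" : "ok") << "\n";  // deadlock
      return 0;
  }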

17  Circular Waiting (diagram only; no text on this slide)

18  Next Class
- Deadlock continued

