
1 Overview
Processes, Thread Management, Basic IPC
Memory Management
Resource Sharing & Inter-Process Interactions
– Issues and Problems (Deadlocks, CS, ME…)
– Task Orderings (Scheduling issues and solutions)
– Algorithmic Solutions (Races, ME…)
– Program Level Solutions (Semaphores, Monitors)
File Systems, I/O, Distributed OS, etc.

2 Critical Section (CS) & Mutual Exclusion (ME) (or how to do something useful in an OS)
Lots and lots of processes and threads in the OS – how to orchestrate them so things run without contention? I.e., how to give each process dedicated access (ME) to shared execution areas (CS).
– Does it help if a process starts writing in the same space another process is reading from?
– Does it help if a new process gets allocated the same resources (registers, variables, code segments, etc.) already allocated to an existing process (before that process gets to finish)?
– Does it help if a slow process blocks other processes from executing?

3 Four key conditions need to hold to provide mutual exclusion
1. Unity: No two processes are simultaneously in the CS.
2. Fairness: No assumptions can be made about speeds or numbers of CPUs, except that each process executes at non-zero speed.
3. Progress: No process running outside its CS may block another process (from accessing the CS).
4. Bounded Waiting: No process must wait forever to enter its CS.
– Deadlocks, Livelocks
– Interrupt disabling
– Lock variables
– Strict alternation (busy waiting)
– Sleep & Wakeup
– …
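One of the bulleted approaches, strict alternation, can be sketched in a few lines. This is a minimal sketch (variable names are my own): two threads share a `turn` flag and busy-wait for their turn. It guarantees mutual exclusion but violates the Progress condition above, since a thread that stops requesting the CS strands the other in its spin loop.

```python
import threading

# Strict alternation: two threads take turns via a shared 'turn' flag.
# Mutual exclusion holds, but Progress is violated: if thread 0 stops
# requesting the CS, thread 1 busy-waits forever.
turn = 0
counter = 0
ROUNDS = 100

def worker(me):
    global turn, counter
    for _ in range(ROUNDS):
        while turn != me:        # busy wait until it is our turn
            pass
        counter += 1             # critical section
        turn = 1 - me            # hand the turn to the other thread

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)                   # 200: no update was lost
```

The busy waiting itself is the other cost: a waiting thread burns CPU instead of sleeping, which motivates the Sleep & Wakeup approaches listed above.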

4 Deadlocks (& Livelocks & Starvation)
Deadlock definitions
Deadlock – detection and recovery – avoidance – prevention

5 The Deadlock Problem
A set of (blocked) processes, each holding a resource and waiting to acquire a resource held by another process in the set.
– 2 disk drives A and B; processes P1 and P2 each hold one disk drive and each needs the other one to proceed
– 4-way traffic intersection with all cars arriving at the same time
– Dining philosophers
[Figure: P1 holds A and needs B; P2 holds B and needs A]

6 Deadlocks
Definition: A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause. Usually the event is the release of a currently held resource.
Deadlock → none of the processes can…
– run
– release resources
– be awakened

7 Deadlock Characterization
Deadlock can arise if all four conditions hold simultaneously:
Mutual exclusion: Each resource is either currently assigned to exactly one process or is available.
Hold and wait: A process holding at least one resource has requested and is waiting to acquire additional resources currently held by other processes.
No preemption: A resource cannot be taken forcibly from the process holding it; it can be released only voluntarily by that process, after it has completed its task.
Circular wait: A circular chain of more than one process exists! There is a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

8 Deadlock Modeling: Resource Graphs
Modeled with directed graphs:
– process A holds resource R [arrow from resource to process]
– process B is waiting for (requesting) resource S [arrow from process to resource]
– processes C and D are in deadlock over resources T and U: C has U and wants T; D has T and wants U

9 (Visual) Deadlock Modeling
Deadlock occurrence under a strict request order: non-competing A, B, C request patterns – the kernel decides the actual allocation. Cycle!

10 Deadlock Modeling
Deadlock avoidance via a change of request order by the OS! Requires the OS to see the entire request pattern first (no on-the-fly allocation) and select an order.
Conceptually nice but naive: (a) requests are often dynamic; (b) tasks usually have precedence relations, priorities, timing durations, etc.: partial orders & scheduling algorithms.

11 Deadlock Issues
Four general strategies for dealing with deadlocks:
– Just ignore the problem altogether (…and it will go away)
– Detection and recovery (let it happen, detect, and recover)
– Avoidance (avoid, by design) – do careful resource allocation
– Prevention – negate one of the four necessary conditions for deadlock

12 The Windows (up to NT) ’n’ Unix Way!
Pretend there is no problem (and it will go away…)! Actually… it's reasonable and works OK:
– if the cost of prevention is high
– if deadlocks occur very rarely, or may go away with random back-off (timeouts!) for delayed services. Ethernet: exponential back-offs can work just fine…
Trade-off between
– convenience
– correctness (repeatable, predictable/deterministic)

13 1. Detection with One Resource of Each Type
Develop a resource ownership and requests graph (non-dynamic allocations). If a cycle can be found within the graph → deadlock.
Set up a data structure (say, a DFS tree) where a cycle becomes visible as node repetition.
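The cycle search the slide describes can be sketched with a standard depth-first search over the resource graph. A minimal sketch, with the graph encoding (processes and resources both as nodes, held → holder edges and requester → wanted edges) and function name my own:

```python
# Deadlock detection with one resource of each type: build the directed
# resource graph (process -> wanted resource, resource -> holding process)
# and look for a cycle with a depth-first search. A GREY node seen again
# on the current DFS path is the "node repetition" the slide mentions.
def has_cycle(graph):
    """graph: dict node -> list of successor nodes."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    def dfs(n):
        color[n] = GREY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GREY:      # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in graph)

# Slide 8's example: C holds U and wants T, D holds T and wants U.
g = {"C": ["T"], "T": ["D"], "D": ["U"], "U": ["C"]}
print(has_cycle(g))   # True: C and D are deadlocked over T and U
```

With one unit of each resource, a cycle is both necessary and sufficient for deadlock, which is why this simple search settles the question.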

14 Detection with Multiple Resources of Each Type
A resource is either allocated or available: x units of type 1, y of type 2, …
Process Pi holds Cij units of resource j and requests Rij more.
Invariant: Allocated + Available = Total Existing

15 Detection & Progress via Process Selection!
Row i ≤ Row j iff each element of Row i ≤ the corresponding element of Row j.
Find a request row Ri ≤ A, i.e., enough resources are available for that process to continue → no deadlock. (In the slide's worked example, P3 runs first and its holdings are added to A; then P2 runs with the new A.)
Q: How often to check for free resources? On each release? On each new process? Costly… plus this works only for static & a priori known resource requests.
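The row-selection procedure above can be sketched directly. The transcript's actual vectors are lost, so the example numbers below are the textbook-style state the slide appears to walk through (P3 can run first, then P2, then P1); treat them as illustrative:

```python
# Deadlock detection with multiple resources of each type: repeatedly
# pick an unfinished process whose outstanding request row fits within
# the available vector A, let it run to completion, and reclaim its
# current allocation row. Whatever remains unmarked is deadlocked.
def detect_deadlock(C, R, A):
    """C: allocation rows, R: request rows, A: available vector.
    Returns indices of processes left deadlocked ([] = none)."""
    A = list(A)
    done = [False] * len(C)
    progress = True
    while progress:
        progress = False
        for i in range(len(C)):
            if not done[i] and all(r <= a for r, a in zip(R[i], A)):
                A = [a + c for a, c in zip(A, C[i])]  # i finishes, frees C[i]
                done[i] = True
                progress = True
    return [i for i in range(len(C)) if not done[i]]

# Illustrative state (assumed numbers): P3's request fits A, so it runs,
# freeing enough for P2, then P1 — no deadlock.
C = [[0, 0, 1, 0], [2, 0, 0, 1], [0, 1, 2, 0]]
R = [[2, 0, 0, 1], [1, 0, 1, 0], [2, 1, 0, 0]]
A = [2, 1, 0, 0]
print(detect_deadlock(C, R, A))   # []: a completion order exists
```

Note the cost the slide flags: each check is O(n²·m) in the worst case, which is why "how often to check" matters.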

16 2. Recovery from Deadlock
Recovery through preemption
– take a resource from some other executing process; depends on the nature of the resource (bandwidth, memory) and whether the process is interruptible (computation vs I/O vs real-time deadlines)
Recovery through rollback
– checkpoint a process periodically
– use this saved state to restart the process if it is found deadlocked
Recovery through killing processes
– crudest but simplest way to break a deadlock: kill one of the processes in the deadlock cycle; the other processes get its resources
– choose a process that “can” be rerun (either from the beginning or from a checkpoint) and does not hold up the live processes with precedence dependencies (e.g., I/O, transaction chain)!

17 3. Deadlock Avoidance
The simplest and most useful model requires that each process declare a priori the maximum number of resources of each type that it may need. The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition (safe versus unsafe scenarios).
– The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
– Requires that the system have some additional a priori information available.

18 Deadlock Avoidance: Execution & Resource Trajectories
Shaded area → the ME condition prevents entry there. Thus, if the trajectory enters the box bounded by I1, I2, I6, I5, it can only lead to deadlock → an “unsafe” state.
[Figure: joint progress diagram – horizontal axis = instructions of process A (requests printer, then plotter), vertical axis = instructions of process B (requests plotter, then printer); the scheduler chooses whether to advance A or B]

19 Safe and Unsafe States
Safe: there is a scheduling order that satisfies all processes even if they request their maximum resources at the same time. (Keep in mind that only one process can execute at a given time!)
Available resources = 10 (maximal concurrent request: 20).
Safe access path starting at (a): B runs with 2, asks for 2 more toward its max of 4, gets them and finishes; C runs with 2, asks for its max, gets it and finishes; then A runs.
Safe = a guarantee that every process can finish.
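The safe-state test can be sketched for a single resource type. The slide states 10 total units and B's max of 4; the other maxima below (A: 9, C: 7, consistent with the stated maximal concurrent request of 20) are assumptions reconstructed from the transcript, so treat the numbers as illustrative:

```python
# Safe-state check for a single resource type: a state is safe if some
# order lets every process obtain its maximum claim and finish,
# releasing its holdings for the next process.
def is_safe(hold, max_need, free):
    need = [m - h for m, h in zip(max_need, hold)]
    done = [False] * len(hold)
    for _ in range(len(hold)):
        # find a process whose remaining need fits in what's free
        i = next((j for j in range(len(hold))
                  if not done[j] and need[j] <= free), None)
        if i is None:
            return False            # nobody can finish: unsafe
        free += hold[i]             # i runs to completion and releases
        done[i] = True
    return True

# Assumed scenario: A holds 3 of max 9, B holds 2 of max 4,
# C holds 2 of max 7, with 3 of 10 units free.
print(is_safe([3, 2, 2], [9, 4, 7], 3))   # True: run B, then C, then A
# Granting A one more unit first leaves only 2 free -> unsafe:
print(is_safe([4, 2, 2], [9, 4, 7], 2))   # False
```

The second call illustrates the next slide's point: the unsafe state is not yet a deadlock, only a state from which the system can no longer guarantee completion.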

20 Safe and Unsafe States
Unsafe access starting from (b). Note: this is not a deadlock – just that the “potential” for a deadlock exists IF A and C ask for their max. If they ask for…

21 Banker's (State) Algorithm for a Single Resource
If a safe order is possible from the current “state”, grant the request; else deny it. Avoiding deadlock by design of permitted safe states!
(In reality, criteria for giving “credits/loans” can be based on (a) past behavior of processes, (b) bursty traffic analysis (likelihood of a run on the bank?), (c) constrained maxima. These form the classical basis for router/server loading decisions for web services!!! DoS attacks make use of this.)
(a) Safe, as the banker can satisfy C's max for progress while delaying the others. Once C finishes, Free = 4, allowing B or D to ask for its max (delaying A), and then A can finish.
(b) Safe, as the max of A, B, C, or D can be satisfied one at a time – a safe order exists!
(c) With Free = 1, no max request of A, B, or C can be satisfied regardless of request order: a “potential for deadlock” if all would make requests!

22 Algorithm for Multiple Resources
1. Look for a row in R whose unmet resource needs are all smaller than or equal to A (the available resource vector). If no such row exists, the system will eventually deadlock since no process can run to completion.
2. Assume the process of the chosen row requests all the resources it needs (which is guaranteed to be possible) and finishes. Mark that process as terminated and add all its resources to the vector A.
3. Repeat steps 1 and 2 until either all processes are marked terminated, in which case the initial state was safe, or until a deadlock occurs, in which case it was not.
Remember the deadlock detection approach?

23 Banker's Algorithm for Multiple Resources
Is this algorithm useful in practice? Is it realistic for a process to declare its max resource needs in advance? (Called C in the prior algorithm.)
If a row R (e.g., D's) whose unmet resources are less than A exists → progress.
Worked example: B requests a scanner; 1 < 2, OK – granting it still allows D, A, E, and C to finish. But if E wants the scanner (after B), A = (1 0 2 0) would be reduced to (1 0 0 0), holding up the others → defer E.
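The grant/defer decision above can be sketched as a safety check run on the tentatively updated state. A minimal sketch (function names `is_safe`/`request` and the tiny two-process example are my own, not from the slide):

```python
# Banker's algorithm sketch: grant a request only if the state that
# results is still safe (some order lets every process reach its
# declared maximum claim and finish).
def is_safe(alloc, max_claim, avail):
    need = [[m - a for m, a in zip(mr, ar)]
            for mr, ar in zip(max_claim, alloc)]
    avail = list(avail)
    done = [False] * len(alloc)
    while True:
        i = next((j for j in range(len(alloc)) if not done[j]
                  and all(n <= v for n, v in zip(need[j], avail))), None)
        if i is None:
            return all(done)        # safe only if everyone finished
        avail = [v + a for v, a in zip(avail, alloc[i])]
        done[i] = True

def request(alloc, max_claim, avail, pid, req):
    """Tentatively grant req to process pid; keep it only if still safe."""
    new_alloc = [row[:] for row in alloc]
    new_alloc[pid] = [a + r for a, r in zip(alloc[pid], req)]
    new_avail = [v - r for v, r in zip(avail, req)]
    if any(v < 0 for v in new_avail) or not is_safe(new_alloc, max_claim, new_avail):
        return None                 # deny/defer: would be unsafe
    return new_alloc, new_avail

# Hypothetical 2-process, 2-resource state: this request is safe, so granted.
alloc = [[0, 1], [1, 0]]
maxc  = [[1, 1], [1, 1]]
print(request(alloc, maxc, [1, 1], 0, [1, 0]) is not None)   # True
```

Deferring (returning None) rather than blocking forever is exactly the "defer E" decision on the slide: the request is held back until granting it leaves a safe state.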

24 4. Deadlock Prevention
Constrain the ways in which requests can be made – negate one of the four conditions:
Mutual exclusion: Each resource is either currently assigned to exactly one process or is available.
Hold and wait: A process holding at least one resource has requested and is waiting to acquire additional resources currently held by other processes.
No preemption: A resource cannot be taken forcibly from the process holding it; it can be released only voluntarily by that process, after it has completed its task.
Circular wait: A circular chain of more than one process exists! There is a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

25 A. Attacking the Mutual Exclusion Condition
Mutual exclusion – not required for shareable resources (e.g., read-only files); must hold for non-shareable resources.
Some devices (such as a printer) can be spooled/buffered:
– only the (single) printer daemon uses the printer resource (no contention!)
– deadlock for the printer is eliminated
Con: not all devices can be spooled (spooler contention deadlock?)
Con: not all processes can be buffered
General principle:
– minimize shared resource assignment
– as few processes as possible actually claim the resource

26 B. Attacking the Hold and Wait Condition
Hold and wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.
Require a process to request and be allocated all its resources before it begins execution (no wait), or allow a process to request resources only when it holds none.
– Low resource utilization; starvation possible.
Problems:
– may not know the required resources at the start of the run
– also ties up resources other processes could be using
Variation:
– the process must give up all resources
– then request all immediately needed ones
– Realistic?

27 C. Attacking the No Preemption Condition
No preemption –
– If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released.
– Preempted resources are added to the list of resources for which the process is waiting.
– The process will be restarted only when it can regain its old resources, as well as the new ones it is requesting.
Preemption is not always a viable option – it depends on the resource & process type. Consider a process given the printer:
– halfway through its job
– now forcibly take away the printer??

28 D. Attacking the Circular Wait Condition
Option 1: A process is entitled to only one resource at a time; it has to give up that resource if it needs another one. Say a process wants to copy a large file from tape to printer – so??? This fails.
Numerically ordered resources: all requests must be made in increasing numerical order → no cycles can form (a process holding resource i may only request resources j > i). This works, but it is difficult to find a consistent global numbering across distributed systems!!!
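The numerical-ordering rule can be sketched as a small acquisition helper. The resource names and numbering below are hypothetical, chosen only to illustrate the idea: no matter what order a caller names its resources, locks are always taken in ascending order, so no wait cycle can form.

```python
import threading

# Breaking circular wait with a global lock order: every thread must
# acquire locks in ascending numerical order, so a cycle of waits is
# impossible (someone always holds the highest-numbered contested lock
# and can proceed).
locks = {name: threading.Lock() for name in ["tape", "disk", "printer"]}
order = {"tape": 1, "disk": 2, "printer": 3}   # hypothetical numbering

def acquire_in_order(*names):
    held = sorted(names, key=order.__getitem__)  # always ascending
    for n in held:
        locks[n].acquire()
    return held

def release(held):
    for n in reversed(held):
        locks[n].release()

# Both callers end up locking tape before printer, whatever order they name:
held = acquire_in_order("printer", "tape")
print(held)        # ['tape', 'printer']
release(held)
```

This is the discipline many kernels and databases document for their internal locks; the hard part, as the slide notes, is agreeing on one numbering across a distributed system.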

29 Attacking the Circular Wait Condition [figure]

30 Alternate Approaches: Two-Phase Locking
Phase one:
– the process tries to lock all records it needs, one at a time
– if a needed record is found locked, start over
– (no real work is done in phase one – only checking & locking resources)
If phase one succeeds, only then start the second phase of actual progress:
– performing updates
– releasing locks
Similar to requesting all resources at once, though staggered in time. The algorithm works where the programmer can handle program stop and restart instances.
KEY: the program is non-monolithic and stoppable, with recovery paths built in! [DB transactions]
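Phase one can be sketched with non-blocking lock attempts: grab each record's lock, and if any is already held, release everything taken so far and start over. A minimal sketch (the helper name and bounded retry count are my own additions to keep it self-contained):

```python
import threading

# Two-phase locking sketch: phase one tries to lock every needed record,
# one at a time; if any is already held, back out completely and restart.
# No real work happens until all locks are held.
def lock_all_or_restart(needed, max_tries=100):
    for _ in range(max_tries):
        got = []
        for lk in needed:
            if lk.acquire(blocking=False):
                got.append(lk)
            else:                       # someone holds it: release and retry
                for g in reversed(got):
                    g.release()
                break
        else:
            return got                  # phase one succeeded: all locks held
    return None                         # gave up (caller decides what next)

a, b = threading.Lock(), threading.Lock()
held = lock_all_or_restart([a, b])
# ...phase two would perform the updates here, then release...
for lk in reversed(held):
    lk.release()
```

Because a failed attempt releases everything, hold-and-wait never arises; the cost is wasted work on restarts, which is acceptable precisely when the program is stoppable with recovery paths, as the slide says.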

31 Livelocks
Conducting activity BUT no progress!
– Busy waiting
– Dining philosophers picking up forks, seeing a lack of full resources, putting them down, and re-trying
– …

32 Starvation
An algorithm to allocate a resource
– e.g., let's give it to the shortest job first
Works great for multiple short jobs in a system, but may cause a long job to be postponed indefinitely
– even though it is not blocked
Solutions:
– first-come, first-served (FIFO) policy
– shortest job first, highest priority, deadline first, etc.
Scheduling policies!
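The indefinite-postponement effect can be shown with a tiny deterministic simulation (job lengths and arrival pattern are invented for illustration): under shortest-job-first, a steady stream of 1-tick jobs keeps a 100-tick job waiting forever, even though nothing ever blocks it.

```python
import heapq

# SJF scheduler simulation: one 100-tick job waits behind an endless
# stream of 1-tick jobs — starvation without blocking.
def simulate_sjf(ticks):
    ready = [(100, "long")]            # (length, name) min-heap
    heapq.heapify(ready)
    started = []
    t = 0
    while t < ticks:
        heapq.heappush(ready, (1, f"short-{t}"))  # a short job arrives
        length, name = heapq.heappop(ready)       # SJF picks the shortest
        started.append(name)
        t += length
    return started

run = simulate_sjf(50)
print("long" in run)     # False: the long job never got the CPU
```

FIFO would run the long job as soon as it reached the head of the queue; aging (boosting priority with waiting time) is the usual compromise between the two policies.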

