1 Synchronization Algorithms and Concurrent Programming, Gadi Taubenfeld © 2014. Chapter 7: Multiple Resources. The Dining Philosophers Problem. Version: June 2014

2 A note on the use of these ppt slides: I am making these slides freely available to all (faculty, students, readers). They are in PowerPoint form so you can add, modify, and delete slides and slide content to suit your needs. They obviously represent a lot of work on my part. In return for use, I only ask the following: that you mention their source (after all, I would like people to use my book!), and that you note that they are adapted from (or perhaps identical to) my slides, and note my copyright of this material. Thanks and enjoy! Gadi Taubenfeld. All material copyright 2014 Gadi Taubenfeld, All Rights Reserved. Synchronization Algorithms and Concurrent Programming, ISBN: 0131972596, 1st edition. To get the most updated version of these slides go to: http://www.faculty.idc.ac.il/gadi/book.htm

3 Chapter 7: Multiple Resources. 7.1 Deadlocks, 7.2 Deadlock Prevention, 7.3 Deadlock Avoidance, 7.4 The Dining Philosophers, 7.5 Hold and Wait Strategy, 7.6 Wait and Release Strategy, 7.7 Randomized Algorithms.

4 Deadlocks (Section 7.1)

5 Deadlocks. A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.

6 Multiple resources: how do we avoid deadlock? Transferring money between two bank accounts A and B, protected by semaphores A and B initialized to 1: P0 executes down(A); down(B) while P1 executes down(B); down(A). If each process completes its first down before the other's second, each holds one semaphore and waits forever for the other: deadlock.
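To make the scenario concrete, here is a minimal Python sketch of the two transfers (the thread setup and the sleep that widens the race window are illustrative assumptions, not part of the slide): run it and the two threads usually block forever on each other's semaphore.

```python
import threading, time

# Semaphores A and B, initialized to 1, protecting the two accounts.
A = threading.Semaphore(1)
B = threading.Semaphore(1)

def transfer_a_to_b():
    A.acquire()            # down(A)
    time.sleep(0.1)        # widen the race window for demonstration
    B.acquire()            # down(B): may block forever
    # ... move the money ...
    B.release(); A.release()

def transfer_b_to_a():
    B.acquire()            # down(B)
    time.sleep(0.1)
    A.acquire()            # down(A): may block forever
    # ... move the money ...
    A.release(); B.release()

t0 = threading.Thread(target=transfer_a_to_b)
t1 = threading.Thread(target=transfer_b_to_a)
t0.start(); t1.start()     # with the sleeps, both threads usually deadlock here
```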

7 Multiple resources: how do we avoid deadlock? Bridge crossing: traffic crosses the bridge in only one direction at a time, and the resources are the two entrances.

8 Two Simple Questions. Question: A system has 2 processes and 3 identical resources. Each process needs a maximum of 2 resources. Is deadlock possible? Answer: No. Question: Consider a system with X identical resources. The system has 15 processes, each needing a maximum of 15 resources. What is the smallest value of X that makes the system deadlock-free (without the need to use a deadlock avoidance algorithm)? Answer: 15×14+1 = 211.
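The arithmetic behind the second answer (a standard worst-case argument, stated here as a supplement to the slide): in the worst case every process holds one unit fewer than its maximum; with one more unit available, some process can always finish and release what it holds.

```latex
% n processes, each needing at most m identical resources.
% Worst case: every process holds m-1 units and requests one more.
X_{\min} = n(m-1) + 1, \qquad \text{here } 15\cdot(15-1)+1 = 211.
```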

9 Question: Two processes, P1 and P2, each need to hold five records 1, 2, 3, 4 and 5 in a database to complete. If P1 asks for them in the order 1,2,3,4,5 and P2 asks for them in the same order, deadlock is not possible. However, if P2 asks for them in the order 5,4,3,2,1, then deadlock is possible. With five records there are 5! or 120 possible orders in which each process can request them, hence 5!×5! different combined algorithms. What is the exact number of algorithms (out of 5!×5!) that are guaranteed to be deadlock free? Answer: deadlock is impossible exactly when both processes request the same record first, so there are 5×(4!×4!) = (5!×5!)/5 = 2880 such algorithms.

10 Strategies for dealing with deadlocks:
- Just ignore the problem altogether: UNIX and Windows take this approach.
- Detection and recovery: allow the system to enter a deadlock state and then recover.
- Avoidance: by careful resource allocation, ensure that the system will never enter a deadlock state.
- Prevention: the programmer should write programs that never deadlock. This is achieved by negating one of the four necessary conditions for deadlock (listed on the next slide).

11 Deadlock Prevention (Section 7.2)

12 Deadlock Prevention: attack one of the following necessary conditions for deadlock.
- Mutual exclusion condition: only one process at a time can use the resource.
- Hold and wait condition: a process can request (and wait for) a resource while holding another resource.
- No preemption condition: a resource can be released only voluntarily by the process holding it.
- Circular wait condition: there must be a cycle of processes, each waiting for a resource held by the next one.

13 Attacking the mutual exclusion condition:
- Some devices (such as a printer) can be spooled: only the printer daemon uses the printer resource, so deadlock over the printer is eliminated.
- Not all devices can be spooled, so this attack is not useful in general.
Attacking the no preemption condition:
- Many resources (such as a printer) should not be preempted: we cannot take the printer away from a process that has not finished printing yet.
- So this attack is not useful in general either.

14 Attacking the hold and wait condition: processes may request all the resources they need in advance. Problems:
- A process may not know all its required resources in advance.
- Inefficient: it ties up resources that other processes could be using.
- Starvation is possible.

15 Two-Phase Locking (notice the similarity to requesting all resources at once):
- Phase one: the process tries to lock all the resources it currently needs, one at a time; if a needed record is not available, it releases everything it holds and starts over.
- Phase two: when phase one succeeds, perform the updates and release the locks.
Livelock is possible.
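A minimal Python sketch of the phase-one retry loop described above (the function and its arguments are illustrative assumptions, not the book's pseudocode): every needed lock is tried without blocking, and on any failure everything acquired so far is released before starting over.

```python
import threading, time, random

def two_phase_locking(locks, do_updates):
    """Phase one: try to take every lock; on any failure release all and retry.
       Phase two: perform the updates, then release the locks."""
    while True:
        acquired = []
        for lock in locks:
            if lock.acquire(blocking=False):
                acquired.append(lock)
            else:
                # A needed resource is not available: release and start over.
                for held in acquired:
                    held.release()
                time.sleep(random.uniform(0, 0.01))   # brief back-off
                break
        else:
            break                                     # phase one succeeded
    try:
        do_updates()                                  # phase two: perform the updates
    finally:
        for lock in acquired:
            lock.release()                            # phase two: release the locks
```

As the slide notes, two such processes can keep colliding and restarting forever, which is exactly the livelock this strategy admits.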

16 Time stamps:
- Before a process starts locking, a unique new timestamp is associated with that process.
- If a process has been assigned timestamp Ti and later a new process is assigned timestamp Tj, then Ti < Tj.

17 The time-stamp ordering technique (prevents deadlock and starvation).
Phase one: the process tries to lock all the resources it currently needs, one at a time.
- If a needed resource is not available and the timestamp associated with it (that of the process holding it) is smaller than the requesting process's timestamp, the requester releases all its resources, waits until the resource with the smaller timestamp is released, and starts over.
- Otherwise, if the resource's timestamp is not smaller, the requester waits until the resource is released and then locks it.
Phase two: when phase one succeeds, perform the updates and release the locks.
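A Python sketch of this technique, under one reading of the slide where each held resource carries the timestamp of the process holding it (the class, its method names, and the single global condition variable are illustrative assumptions):

```python
import threading

class TimestampLockManager:
    """Sketch of the time-stamp ordering technique described above.
       One condition variable guards a map from each resource to the
       timestamp of the process holding it (None means the resource is free)."""

    def __init__(self, resources):
        self._cond = threading.Condition()
        self._holder_ts = {r: None for r in resources}
        self._next_ts = 0

    def new_timestamp(self):
        # Unique, monotonically increasing timestamps.
        with self._cond:
            self._next_ts += 1
            return self._next_ts

    def lock_all(self, ts, needed):
        """Phase one: lock every resource in `needed`, one at a time."""
        with self._cond:
            while True:                              # restart loop
                held, restarted = [], False
                for r in needed:
                    while self._holder_ts[r] is not None:
                        if self._holder_ts[r] < ts:
                            # Held by an older process: release everything,
                            # wait until that resource is free, then start over.
                            for h in held:
                                self._holder_ts[h] = None
                            self._cond.notify_all()
                            while self._holder_ts[r] is not None:
                                self._cond.wait()
                            restarted = True
                            break
                        self._cond.wait()            # held by a younger process: just wait
                    if restarted:
                        break
                    self._holder_ts[r] = ts
                    held.append(r)
                if not restarted:
                    return                           # phase one succeeded

    def release_all(self, needed):
        """Phase two (after the updates): release the locks."""
        with self._cond:
            for r in needed:
                self._holder_ts[r] = None
            self._cond.notify_all()
```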

18 Attacking the circular wait condition: impose a total ordering of all resource types, and require that each process requests resources in increasing order of enumeration. This solves the problem of transferring money between two bank accounts: both transfers must take account A before account B, so the circular wait from slide 6 cannot form. We will see other interesting uses of this observation.
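Back to the bank example, a short Python sketch of the ordered-acquisition rule (the Account class and its id field are illustrative assumptions): both directions of transfer lock the accounts in the same global order, so the cycle from slide 6 cannot form.

```python
import threading

class Account:
    def __init__(self, acct_id, balance):
        self.id = acct_id                 # position in the global resource order
        self.balance = balance
        self.sem = threading.Semaphore(1)

def transfer(src, dst, amount):
    # Always lock the lower-id account first, regardless of transfer direction.
    first, second = (src, dst) if src.id < dst.id else (dst, src)
    first.sem.acquire()
    second.sem.acquire()
    try:
        src.balance -= amount
        dst.balance += amount
    finally:
        second.sem.release()
        first.sem.release()

a = Account(1, 100)
b = Account(2, 100)
threading.Thread(target=transfer, args=(a, b, 10)).start()
threading.Thread(target=transfer, args=(b, a, 25)).start()
```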

19 Deadlock Avoidance (Section 7.3)

20 Deadlock Avoidance: safe and unsafe states. (Figure: trajectories over time through safe, unsafe, and deadlock regions of the state space; the goal is to end with all processes terminated.)

21 Basic facts:
- If a system is in a safe state, there is no deadlock.
- If a system is in an unsafe state, deadlock is possible now or in the future.
- Deadlock avoidance: ensure that the system will never enter an unsafe state.

22 Example: prove that the state below is safe.
Process  Allocation  Maximum
P1       1           9
P2       4           5
P3       2           8
Available: 2

23 Proof: there is an order in which every process can run to completion.
- Start: P1 (allocation 1, maximum 9), P2 (4, 5), P3 (2, 8); available: 2.
- Give P2 the 1 resource it still needs: P2 (5, 5); available: 1. P2 finishes and releases its 5; available: 6.
- Give P3 the 6 resources it still needs: P3 (8, 8); available: 0. P3 finishes and releases its 8; available: 8.
- Give P1 the 8 resources it still needs: P1 (9, 9); available: 0. P1 finishes and releases its 9; available: 9.
- All processes have terminated, so the state is safe.

24 Example: safe and unsafe. Starting from the state above, if process P1 requests one resource (out of the 2 available), the banker will not allocate it: the resulting state P1 (allocation 2, maximum 9), P2 (4, 5), P3 (2, 8) with available: 1 is unsafe.

25 The Banker's Algorithm:
- When there is a request for an available resource, the banker must decide whether immediate allocation leaves the system in a safe state.
- If the answer is positive, the resource is allocated; otherwise the request is temporarily denied.
- A state is safe if there exists a sequence of all processes such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj with j < i.
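A short Python sketch of the banker's safety check for a single resource type, using the numbers from the example above (the function name and data layout are illustrative assumptions):

```python
def is_safe(available, allocation, maximum):
    """Return True if some order lets every process run to completion."""
    alloc = dict(allocation)
    work = available
    unfinished = set(alloc)
    while unfinished:
        # Find a process whose remaining need fits in what is currently available.
        runnable = next((p for p in unfinished
                         if maximum[p] - alloc[p] <= work), None)
        if runnable is None:
            return False          # nobody can finish: unsafe
        work += alloc[runnable]   # it finishes and releases its allocation
        alloc[runnable] = 0
        unfinished.remove(runnable)
    return True                   # a full termination sequence exists: safe

allocation = {"P1": 1, "P2": 4, "P3": 2}
maximum    = {"P1": 9, "P2": 5, "P3": 8}
print(is_safe(2, allocation, maximum))                    # True: the example state is safe
print(is_safe(1, {"P1": 2, "P2": 4, "P3": 2}, maximum))   # False: the unsafe state of slide 24
```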

26 The Banker's Algorithm: comments.
- It can handle multiple instances of a resource.
- Each process must claim its maximum use a priori (a disadvantage).
- When a process requests a resource, it may have to wait.
- When a process gets all its resources, it must return them in a finite amount of time.

27 The Dining Philosophers Problem (Section 7.4)

28 Dining Philosophers. Each philosopher repeatedly thinks, takes forks, eats, and puts the forks down. Eating needs 2 forks, and a philosopher can pick up only one fork at a time. How do we prevent deadlock?

29 An incorrect solution: every philosopher first takes its left fork (all marked L in the figure; the marker shows the fork each philosopher is waiting for), then waits for its right fork. Each philosopher holds one fork and waits for its neighbor: a circular wait, hence deadlock.
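A minimal Python sketch of this incorrect protocol (the philosopher count and the sleep that lines the threads up are illustrative assumptions): once every thread holds its left fork, every right fork is already taken, and the program hangs.

```python
import threading, time

N = 6
forks = [threading.Lock() for _ in range(N)]

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    left.acquire()            # take the left fork first ...
    time.sleep(0.1)           # ... giving everyone time to do the same
    right.acquire()           # every right fork is someone's left fork: deadlock
    # eat ...
    right.release(); left.release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
```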

30 An inefficient solution using mutual exclusion.
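Presumably the slide's solution wraps the whole take-forks/eat/put-forks section in one critical section; a tiny Python sketch under that assumption (it prevents deadlock, but at most one philosopher eats at a time, which is why it is inefficient):

```python
import threading

N = 6
table = threading.Semaphore(1)             # one global mutual-exclusion semaphore
forks = [threading.Lock() for _ in range(N)]

def philosopher(i):
    while True:
        # think ...
        with table:                        # only one philosopher at a time past this point
            with forks[i], forks[(i + 1) % N]:
                pass                       # eat
```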

31 Proving deadlock-freedom and starvation-freedom: impose a total ordering of all forks and require that each philosopher requests its forks in increasing order. (Figure: forks numbered 1 to 6; under this rule five philosophers end up taking their right fork first, marked R, and one takes its left fork first, marked L.)
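A Python sketch of that rule (fork numbering and orientation are assumptions for illustration): each philosopher always picks up the lower-numbered of its two forks first, so a cycle of waiting philosophers cannot form.

```python
import threading

N = 6
forks = [threading.Lock() for _ in range(N)]   # fork k has number k

def philosopher(i):
    a, b = i, (i + 1) % N
    first, second = (a, b) if a < b else (b, a)  # lower-numbered fork first
    while True:
        # think ...
        with forks[first]:
            with forks[second]:
                pass  # eat
```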

32 The Hold and Wait Strategy (Section 7.5)

33 The LR Solution. (Figure: each philosopher is labeled L or R, indicating whether it takes its left or its right fork first.)

34 The LR Solution: proving deadlock-freedom and starvation-freedom. (Figure: philosophers 1 to 6, alternately labeled R, L, R, L, R, L.)
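A Python sketch of the LR rule for an even number of philosophers (which parity is labeled L and which R is an assumption taken from the figure): R philosophers take their right fork first, L philosophers their left.

```python
import threading

N = 6                                        # LR assumes an even number of philosophers
forks = [threading.Lock() for _ in range(N)]

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    # Alternate labels around the table: here even philosophers are R, odd are L.
    first, second = (right, left) if i % 2 == 0 else (left, right)
    while True:
        # think ...
        with first:
            with second:
                pass                         # eat
```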

35 Concurrency: how many philosophers can eat simultaneously?

36 At most half can eat simultaneously (each eating philosopher uses two of the n forks).

37 Only one third can eat simultaneously: any algorithm is at most n/3-concurrent.

38 In LR only one fourth can eat simultaneously: the LR algorithm is at most n/4-concurrent.

39 If all philosophers want to eat, there is a case in which only n/4 of them will be able to eat simultaneously. (Figure: L and R labels show which fork each philosopher takes first; the marker shows the fork a philosopher is waiting for; one fork remains free.)

40 Robustness. k-robust: if all except k consecutive philosophers fail, then one of them will not starve.
- Any algorithm is at most n/3-robust.
- The LR algorithm is not 4-robust.
- The LR algorithm is 5-robust iff n is even.
- There is no 4-robust algorithm using a hold and wait strategy.

41 The LLR Solution. (Figure: philosophers labeled L or R, indicating which fork each takes first.)

42 The LLR solution: proving deadlock-freedom and starvation-freedom. (Figure: six philosophers labeled with the repeating pattern L, L, R.)

43 In LLR one third can eat simultaneously: the LLR algorithm is n/3-concurrent, and this bound is tight.

44 Robustness. k-robust: if all except k consecutive philosophers fail, then one of them will not starve.
- The LLR algorithm is not 4-robust.
- The LLR algorithm is 5-robust iff n ≡ 0 mod 3.

45 The Wait and Release Strategy (Section 7.6)

46 The LR wait/release algorithm. (Figure: philosophers labeled R and L.) The LR wait/release algorithm is:
- deadlock-free but not starvation-free;
- n/3-concurrent;
- 3-robust iff n is even.
Recall: there is no 4-robust algorithm using a hold and wait strategy.
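A Python sketch of the wait/release idea (an illustration of the general strategy; the parity used for the L and R labels is an assumption, and this is not claimed to be the book's exact LR wait/release algorithm): a philosopher waits for its first fork but only tries the second, releasing the first and starting over if the second is taken.

```python
import threading, time, random

N = 6
forks = [threading.Lock() for _ in range(N)]

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    first, second = (left, right) if i % 2 == 0 else (right, left)  # assumed LR labels
    while True:
        # think ...
        while True:
            first.acquire()                    # wait for the first fork
            if second.acquire(blocking=False): # only *try* the second fork
                break
            first.release()                    # release and start over
            time.sleep(random.uniform(0, 0.01))
        # eat ...
        second.release()
        first.release()
```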

47 Randomized Algorithms (Section 7.7)

48 Two Randomized Algorithms.
The Free Philosophers algorithm is:
- deadlock-free with probability 1, but is not starvation-free and is not 2-concurrent;
- 3-robust with probability 1.
The Courteous Philosophers algorithm is:
- starvation-free with probability 1, but is not 2-concurrent and is not (n-1)-robust.
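A Python sketch in the spirit of the free philosophers (its structure follows the classic Lehmann and Rabin coin-flipping scheme and is offered as an assumption, not as the book's exact code): each philosopher flips a coin to choose which fork to wait for first, then only tries the other fork, releasing and re-flipping on failure.

```python
import threading, random, time

N = 6
forks = [threading.Lock() for _ in range(N)]

def free_philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    while True:
        # think ...
        while True:
            first, second = (left, right) if random.random() < 0.5 else (right, left)
            first.acquire()                    # wait for the randomly chosen fork
            if second.acquire(blocking=False): # try the other fork
                break
            first.release()                    # give up and flip the coin again
            time.sleep(random.uniform(0, 0.01))
        # eat ...
        second.release()
        first.release()
```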

