
Universality of Consensus


1 Universality of Consensus
Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit

2 Turing Computability
A mathematical model of computation. Computable = computable on a Turing Machine. Art of Multiprocessor Programming © Copyright Herlihy-Shavit 2007

3 Shared-Memory Computability
(Figure: multiprocessor with per-processor caches over shared memory.) Model of asynchronous concurrent computation. Computable = wait-free/lock-free computable on a multiprocessor.

4 The Consensus Hierarchy
Can we implement them from any other object that has consensus number ∞? Like compareAndSet()…
1: Read/Write Registers, Snapshots, …
2: getAndSet, getAndIncrement, …, FIFO Queue, LIFO Stack
⋮
∞: compareAndSet, …, Multiple Assignment
As we showed, the power of a machine to compute without locking depends on the synchronization operations (synchronization primitives) it makes available to the user. Read/write-register-based objects can implement each other, but cannot implement objects like FIFO queues, which in turn cannot implement objects like multiple assignment. But are we sure we can implement multiple assignment using CAS? Can we also implement a FIFO queue using CAS?

5 Theorem: Universality
Consensus is universal: from n-thread consensus, build a wait-free, linearizable, n-threaded implementation of any sequentially specified object.

6 Proof Outline
A universal construction: from n-consensus objects and atomic registers, build any wait-free linearizable object. Not a practical construction, but we know where to start looking…

7 Like a Turing Machine
This construction illustrates what needs to be done; it is optimization fodder. Correctness, not efficiency. Why does it work? (asks the scientist) How does it work? (asks the engineer) Would you like fries with that? (asks the liberal arts major)

8 A Generic Sequential Object
public interface SeqObject { public abstract Response apply(Invoc invoc); }
The sequential object has an initial state, and each call to apply() takes an invocation as its input. The invocation is a description of the invoked method and its arguments. The apply() call applies the invoked method to the object, modifying the generic object's state and returning an appropriate output as its response. For example, a stack's invocations would be either a push with the appropriate argument or a pop with a void argument. The response of the push would be void, and the response of the pop would be the last element pushed.
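To make the interface concrete, here is a minimal sketch of a sequential stack driven through apply(). SeqObject, Invoc, and Response follow the slides; the SeqStack class, its varargs constructor, and the string method names "push"/"pop" are illustrative assumptions, not the book's code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The deck's generic sequential interface (Invoc/Response as on the slides).
interface SeqObject {
    Response apply(Invoc invoc);
}

class Invoc {
    public String method;
    public Object[] args;
    Invoc(String method, Object... args) { this.method = method; this.args = args; }
}

class Response {
    public Object value;
    Response(Object value) { this.value = value; }
}

// Illustrative example (not from the book): a sequential stack whose
// push/pop methods are invoked through apply().
class SeqStack implements SeqObject {
    private final Deque<Object> items = new ArrayDeque<>();
    public Response apply(Invoc invoc) {
        switch (invoc.method) {
            case "push":
                items.push(invoc.args[0]);   // push returns void
                return new Response(null);
            case "pop":
                // pop returns the last element pushed (null if empty)
                return new Response(items.isEmpty() ? null : items.pop());
            default:
                throw new IllegalArgumentException("unknown method: " + invoc.method);
        }
    }
}
```

Pushing 5 and then popping through apply() returns a Response whose value is 5, matching the slide's "Push:5 … Pop" animation.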

9 A Generic Sequential Object
public interface SeqObject { public abstract Response apply(Invoc invoc); } Push:5, Pop:null

10 Invocation
public class Invoc { public String method; public Object[] args; }

11 Invocation
public class Invoc { public String method; public Object[] args; } Method name

12 Invocation
public class Invoc { public String method; public Object[] args; } Arguments

13 A Generic Sequential Object
public interface SeqObject { public abstract Response apply(Invoc invoc); } OK, 4

14 Response
public class Response { public Object value; } Return value

15 A Universal Concurrent Object
public interface SeqObject { public abstract Response apply(Invoc invoc); } A universal concurrent object is a concurrent object that is linearizable to this generic sequential object.

16 Start with Lock-Free Universal Construction
First lock-free: infinitely often, some method call finishes. Then wait-free: each method call takes a finite number of steps to finish. We will start by implementing a lock-free universal construction, and only then develop it into a wait-free solution.

17 Lock-free Construction: Naïve Idea
Use a consensus object to store a pointer to a cell holding the current state. Each thread creates a new cell, computes the outcome, and tries to switch the pointer to its outcome. Unfortunately not… consensus objects can be used once only: we cannot run consensus on the same pointer many times, nor can the same thread reuse a consensus object. If students ask: having threads place a new decide object in a node and then go back and update the pointer with a simple write based on the decision will also not work, because slow writers may erase what fast ones did. Our solution is actually along these lines, but it uses a chain of nodes, in each of which the pointer is set only once, which is why it works…

18 Naïve Idea
(Figure: threads concurrently attempting deq and enq on the shared object.)

19 Naïve Idea
Concurrent object: a head pointer to the current state; threads propose enq or deq and decide which to apply using consensus. No good: each thread can use a consensus object only once.

20 Why only once? Why is consensus object not readable?
Queue-based consensus:
public Object decide(Object value) {
  propose(value);
  int i = ThreadID.get();
  Ball ball = this.queue.deq();
  if (ball == Ball.RED)
    return proposed[i];
  else
    return proposed[1 - i];
}
Recall our example of queue-based consensus: it is not obvious how to use the queue to allow reuse of the object, or how to read the state of the consensus object without modifying it. It solves one-time 2-consensus, but it is unclear how to reuse the object or read its state…
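For reference, a single-process sketch of the queue-based 2-consensus protocol the slide recalls. The thread id is passed explicitly instead of the book's ThreadID.get(), and a plain ArrayDeque stands in for the wait-free two-dequeuer queue, so this illustrates only the decision logic, not the concurrency.

```java
import java.util.ArrayDeque;

// Queue-based 2-consensus sketch for threads 0 and 1.
class QueueConsensus {
    enum Ball { RED, BLACK }
    private final ArrayDeque<Ball> queue = new ArrayDeque<>();
    private final Object[] proposed = new Object[2];

    QueueConsensus() {
        queue.add(Ball.RED);    // whoever dequeues RED wins
        queue.add(Ball.BLACK);
    }

    // i is the caller's thread id (0 or 1), passed explicitly here.
    public Object decide(int i, Object value) {
        proposed[i] = value;          // announce my proposal first
        Ball ball = queue.poll();     // race for the red ball
        if (ball == Ball.RED)
            return proposed[i];       // I won: my value is decided
        else
            return proposed[1 - i];   // I lost: the winner's value is decided
    }
}
```

Note the one-shot nature: after both balls are dequeued the object is exhausted, and there is no way for a later caller to read the decision without dequeuing, which is exactly the obstacle the slide points out.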

21 Improved Idea: Linked-List Representation
Each node contains a fresh consensus object used to decide on the next operation. (Figure: a linked list of nodes, tail first, holding deq and enq operations.)

22 Universal Construction
Object represented as: the initial object state, plus a log: a linked list of the method calls. New method call: find the end of the list; atomically append the call; compute the response by traversing the log up to the call, applying all the method calls from the initial state.
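The steps above can be sketched single-threaded: the object is its initial state plus a log of invocations, and each call's response is computed by replaying the log on a private copy. The LogObject class and its plain list append are illustrative assumptions; in the real construction the append is mediated by consensus.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Single-threaded sketch of the log idea, with a stack as the object.
class LogObject {
    static class Invoc {
        final String method; final Object arg;
        Invoc(String method, Object arg) { this.method = method; this.arg = arg; }
    }
    private final List<Invoc> log = new ArrayList<>();

    public Object apply(Invoc invoc) {
        log.add(invoc);  // in the real construction, this append is decided by consensus
        // Replay the whole log on a fresh private copy, from the initial state.
        ArrayDeque<Object> stack = new ArrayDeque<>();
        Object response = null;
        for (Invoc e : log) {
            if (e.method.equals("push")) {
                stack.push(e.arg);
                response = null;                // push returns void
            } else if (e.method.equals("pop")) {
                response = stack.isEmpty() ? null : stack.pop();
            }
        }
        return response;  // response of the last appended call, i.e. mine
    }
}
```

Replay makes the response deterministic: every thread that replays the same log prefix computes the same answers, which is what lets helpers compute responses on each other's behalf later in the deck.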

23 Basic Idea
Use a one-time consensus object to decide each next pointer. All threads update the actual next pointer based on the decision; this is OK because they all write the same value. Challenge: lock-free means we must consider what happens if a thread stops in the middle. Because we use a new node every time, each consensus object is used only once. But, as explained later, a consensus object cannot be read, so the actual next pointer must be a regular memory location; since all threads write the same value to it, there is no problem.

24 Basic Data Structures
public class Node implements java.lang.Comparable {
  public Invoc invoc;
  public Consensus<Node> decideNext;
  public Node next;
  public int seq;
  public Node(Invoc invoc) {
    this.invoc = invoc;
    decideNext = new Consensus<Node>();
    seq = 0;
  }
}

25 Basic Data Structures
public class Node implements java.lang.Comparable { public Invoc invoc; public Consensus<Node> decideNext; public Node next; public int seq; public Node(Invoc invoc) { this.invoc = invoc; decideNext = new Consensus<Node>(); seq = 0; } } Standard interface for a class whose objects are totally ordered

26 Basic Data Structures
public class Node implements java.lang.Comparable { public Invoc invoc; public Consensus<Node> decideNext; public Node next; public int seq; public Node(Invoc invoc) { this.invoc = invoc; decideNext = new Consensus<Node>(); seq = 0; } } The invocation

27 Basic Data Structures
public class Node implements java.lang.Comparable { public Invoc invoc; public Consensus<Node> decideNext; public Node next; public int seq; public Node(Invoc invoc) { this.invoc = invoc; decideNext = new Consensus<Node>(); seq = 0; } } Decide on next node (next method applied to object)

28 Basic Data Structures
public class Node implements java.lang.Comparable { public Invoc invoc; public Consensus<Node> decideNext; public Node next; public int seq; public Node(Invoc invoc) { this.invoc = invoc; decideNext = new Consensus<Node>(); seq = 0; } } Traversable pointer to next node (needed because you cannot repeatedly read a consensus object)

29 Basic Data Structures
public class Node implements java.lang.Comparable { public Invoc invoc; public Consensus<Node> decideNext; public Node next; public int seq; public Node(Invoc invoc) { this.invoc = invoc; decideNext = new Consensus<Node>(); seq = 0; } } Seq number

30 Basic Data Structures
public class Node implements java.lang.Comparable { public Invoc invoc; public Consensus<Node> decideNext; public Node next; public int seq; public Node(Invoc invoc) { this.invoc = invoc; decideNext = new Consensus<Node>(); seq = 0; } } Create a new node for a given method invocation

31 Universal Object
(Figure: a log of nodes 1–4 starting at tail; each node holds a seq number, an Invoc, a decideNext consensus object, and a next pointer. head is a pointer to the cell with the highest seq number.)

32 Universal Object
All threads repeatedly modify head… back to where we started? (Figure: nodes 1–4 with a single shared head pointer.)

33 The Solution: Make head an Array
Threads find the head by taking the max of the nodes pointed to by the head array; thread i updates location i. We need to locate the current head of the log, but we cannot use a simple consensus object, because the head must be updated repeatedly while consensus objects can be accessed only once by each thread. Instead we use a distributed structure of the kind used in the Bakery algorithm: we represent the head pointer as an n-entry array, where head[i] is the last node in the list that thread i has observed, understanding that many of the observed entries may be outdated due to concurrency. Initially all entries point to the same sentinel node pointed to by tail. The head is determined by finding the node with the maximum sequence number among the nodes read in the head array; the max() method returns such a node, whose sequence number is no smaller than the maximum in the array before max() was called.

34 Universal Object
public class Universal {
  private Node[] head;
  private Node tail = new Node();
  public Universal() {
    tail.seq = 1;
    for (int j = 0; j < n; j++)
      head[j] = tail;
  }
}

35 Universal Object
public class Universal { private Node[] head; private Node tail = new Node(); public Universal() { tail.seq = 1; for (int j = 0; j < n; j++) head[j] = tail; } } Head pointers array

36 Universal Object
public class Universal { private Node[] head; private Node tail = new Node(); public Universal() { tail.seq = 1; for (int j = 0; j < n; j++) head[j] = tail; } } Tail is a sentinel node with sequence number 1

38 Universal Object
public class Universal { private Node[] head; private Node tail = new Node(); public Universal() { tail.seq = 1; for (int j = 0; j < n; j++) head[j] = tail; } } Initially head points to tail

39 Find Max Head Value
public static Node max(Node[] array) {
  Node max = array[0];
  for (int i = 1; i < array.length; i++)
    if (max.seq < array[i].seq)
      max = array[i];
  return max;
}
This works as in the Bakery algorithm.

40 Find Max Head Value
public static Node max(Node[] array) { Node max = array[0]; for (int i = 1; i < array.length; i++) if (max.seq < array[i].seq) max = array[i]; return max; } Traverse the array

41 Find Max Head Value
public static Node max(Node[] array) { Node max = array[0]; for (int i = 1; i < array.length; i++) if (max.seq < array[i].seq) max = array[i]; return max; } Compare the seq nums of the nodes pointed to by the array

42 Find Max Head Value
public static Node max(Node[] array) { Node max = array[0]; for (int i = 1; i < array.length; i++) if (max.seq < array[i].seq) max = array[i]; return max; } Return the node with the max seq num

43 Universal Application Part I
public Response apply(Invoc invoc) {
  int i = ThreadID.get();
  Node prefer = new Node(invoc);
  while (prefer.seq == 0) {
    Node before = Node.max(head);
    Node after = before.decideNext.decide(prefer);
    before.next = after;
    after.seq = before.seq + 1;
    head[i] = after;
  }
  ...
}
Part I of the universal construction: thread the method call onto the log.

44 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } Apply takes an invocation as input and returns the appropriate response

45 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } My id

46 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } My method call

47 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } As long as my node has not been threaded into the list

48 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } Node at the head of the list that I will try to append to

49 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } Decide the winning node; it could have already been decided

50 Universal Application
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } Notice that the next pointer could already have been written by another thread, in which case this thread overwrites the location with the same value: no effect. Set the next pointer based on the decision.

51 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } Notice that the seq number could be set by a thread other than the one whose method call the winning node represents. Set the seq number, indicating the node was appended.

52 Universal Application Part I
public Response apply(Invoc invoc) { int i = ThreadID.get(); Node prefer = new Node(invoc); while (prefer.seq == 0) { Node before = Node.max(head); Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } ... } Add to the head array so the new head will be found.

53 Part II – Compute Response
(Figure: starting from tail, replay the enq and deq calls on a private copy of the object up to Red's own method call, then return its response.)

54 Universal Application Part II
... // compute my response
SeqObject myObject = new SeqObject(); // pseudocode: a fresh private copy of the sequential object
Node current = tail.next;
while (current != prefer) {
  myObject.apply(current.invoc);
  current = current.next;
}
return myObject.apply(current.invoc);
In this part of the code we compute the response.

55 Universal Application Part II
... //compute my response SeqObject myObject = new SeqObject(); Node current = tail.next; while (current != prefer) { myObject.apply(current.invoc); current = current.next; } return myObject.apply(current.invoc); Compute the result by sequentially applying the method calls in the list to a private copy of the object, starting from the initial state.

56 Universal Application Part II
... //compute my response SeqObject myObject = new SeqObject(); Node current = tail.next; while (current != prefer) { myObject.apply(current.invoc); current = current.next; } return myObject.apply(current.invoc); Start with an initialized copy of the sequential object

57 Universal Application Part II
... //compute my response SeqObject myObject = new SeqObject(); Node current = tail.next; while (current != prefer) { myObject.apply(current.invoc); current = current.next; } return myObject.apply(current.invoc); The first method call is appended right after the tail

58 Universal Application Part II
... //compute my response SeqObject myObject = new SeqObject(); Node current = tail.next; while (current != prefer) { myObject.apply(current.invoc); current = current.next; } return myObject.apply(current.invoc); While my own method call has not been reached

59 Universal Application Part II
... //compute my response SeqObject myObject = new SeqObject(); Node current = tail.next; while (current != prefer) { myObject.apply(current.invoc); current = current.next; } return myObject.apply(current.invoc); Apply the current node's method to the object

60 Universal Application Part II
... //compute my response SeqObject myObject = new SeqObject(); Node current = tail.next; while (current != prefer) { myObject.apply(current.invoc); current = current.next; } return myObject.apply(current.invoc); Return the result after applying my own method call

61 Correctness
The list defines a linearized sequential history: each thread returns its response based on the list order. The construction is correct, that is, a linearizable implementation of the sequential object, because the log is immutable and each apply() call can be linearized at the point at which the consensus call that added it to the log was decided.

62 Lock-freedom
Lock-free because a thread always moves forward in the list: it can repeatedly fail to win consensus on the "real" head only if another thread succeeds, and the consensus winner adds its node and completes within a finite number of steps. In more detail: the "true" head of the log, the latest node agreed on, is always recorded in the head array within a finite number of steps of the decision, because its predecessor must already appear in the head array, and any thread repeatedly trying to append will run max() on the head array, detect that predecessor, run consensus on its decideNext, update the winning node's fields (including its sequence number), and store the decided node in its own head entry. Therefore, the only way a thread can spin indefinitely without appending its own node is if other threads keep succeeding in appending theirs, continuously advancing the head. Such starvation can occur only while other threads keep completing their invocations, which is exactly lock-freedom.

63 Wait-free Construction
Lock-free construction + announce array. A thread stores (a pointer to) its node in announce; if a thread doesn't append its own node, another thread will see it in the array and help append it. How do we turn a lock-free algorithm into a wait-free one? We need to guarantee that every thread completes its apply() within a finite number of steps, that is, no thread starves. To guarantee this, threads making progress must help less fortunate threads complete their calls. The "helping" methodology shown here in a universal context occurs in more specialized forms in many wait-free algorithms. To allow helping, each thread must share with the others the details of the apply() call it is trying to complete. We therefore add an n-element announce array, where announce[i] is the node thread i is currently trying to append to the list. Initially all entries point to the dummy node, which has sequence number 1. We say that thread i announces a node when it stores the node in announce[i].

64 Helping
"Announcing" my intention guarantees progress: even if the scheduler hates me, my method call will complete. This makes the protocol wait-free; otherwise starvation would be possible.

65 Wait-free Construction
(Figure: an announce array alongside the head array; announce[i] points to the cell thread i wants to append. Log nodes 1–4 from tail.)

66 The Announce Array
public class Universal {
  private Node[] announce;
  private Node[] head;
  private Node tail = new Node();
  public Universal() {
    tail.seq = 1;
    for (int j = 0; j < n; j++) {
      head[j] = tail;
      announce[j] = tail;
    }
  }
}

67 The Announce Array
public class Universal { private Node[] announce; private Node[] head; private Node tail = new Node(); public Universal() { tail.seq = 1; for (int j = 0; j < n; j++) { head[j] = tail; announce[j] = tail; } } } Announce array

68 The Announce Array
public class Universal { private Node[] announce; private Node[] head; private Node tail = new Node(); public Universal() { tail.seq = 1; for (int j = 0; j < n; j++) { head[j] = tail; announce[j] = tail; } } } All entries initially point to tail

69 A Cry For Help
public Response apply(Invoc invoc) {
  int i = ThreadID.get();
  announce[i] = new Node(invoc);
  head[i] = Node.max(head);
  while (announce[i].seq == 0) {
    // while node not appended to list
    ...
  }
  ...
}

70 A Cry For Help
public Response apply(Invoc invoc) { int i = ThreadID.get(); announce[i] = new Node(invoc); head[i] = Node.max(head); while (announce[i].seq == 0) { ... } } Announce a new method call (node), asking others for help

71 A Cry For Help
public Response apply(Invoc invoc) { int i = ThreadID.get(); announce[i] = new Node(invoc); head[i] = Node.max(head); while (announce[i].seq == 0) { ... } } Look for the end of the list

72 A Cry For Help
public Response apply(Invoc invoc) { int i = ThreadID.get(); announce[i] = new Node(invoc); head[i] = Node.max(head); while (announce[i].seq == 0) { ... } } Main loop: while my announced node has not been appended (either by me or by some thread helping me), continue.

73 Main Loop
A non-zero sequence number indicates success. A thread keeps helping to append nodes until its own node is appended.

74 Main Loop
while (announce[i].seq == 0) {
  Node before = head[i];
  Node help = announce[(before.seq + 1) % n];
  if (help.seq == 0)
    prefer = help;
  else
    prefer = announce[i];
  ...
}

75 Main Loop
while (announce[i].seq == 0) { Node before = head[i]; Node help = announce[(before.seq + 1) % n]; if (help.seq == 0) prefer = help; else prefer = announce[i]; ... } Keep trying until my cell gets a sequence number

76 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Main Loop while (announce[i].seq == 0) { Node before = head[i]; Node help = announce[(before.seq + 1) % n]; if (help.seq == 0) prefer = help; else prefer = announce[i]; Possible end of list Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

77 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Main Loop while (announce[i].seq == 0) { Node before = head[i]; Node help = announce[(before.seq + 1) % n]; if (help.seq == 0) prefer = help; else prefer = announce[i]; Who do I help? Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

78 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Altruism Choose a thread to “help” If that thread needs help Try to append its node Otherwise append your own Worst case Everyone tries to help same pitiful loser Someone succeeds Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

79 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Help! When last node in list has sequence number k All threads check … Whether thread k+1 mod n wants help If so, try to append her node first Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

80 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Help! First time after thread k+1 announces No guarantees After n more nodes appended Everyone sees that thread k+1 wants help Everyone tries to append that node Someone succeeds Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

81 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Sliding Window Lemma After thread A announces its node No more than n other calls Can start and finish Without appending A’s node The complete proof appears in the textbook. Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

82 So all see and help append 4
Helping So all see and help append 4 [diagram: thread 4 announces "Help me!"; announce array entries 1 2 3 4; list tail … N+2 N+3; max(head) + 1 = N+4] As if there is a window that slides and the node at the edge of the window gets helped. Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

83 The Sliding Help Window
[diagram: announce array entries 1 2 3 4, with "Help 3" and "Help 4"; list tail … N+2 N+3; head] As if there is a window that slides and the node at the edge of the window gets helped. Notice that the window slides along the sequence numbers, and the helping is keyed to thread ids. Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

84 In each main loop iteration pick another thread to help
Sliding Help Window while (announce[i].seq == 0) { Node before = head[i]; Node help = announce[(before.seq + 1) % n]; if (help.seq == 0) prefer = help; else prefer = announce[i]; In each main loop iteration pick another thread to help Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

85 Help if help required, but otherwise it’s all about me!
Sliding Help Window Help if help required, but otherwise it's all about me! while (announce[i].seq == 0) { Node before = head[i]; Node help = announce[(before.seq + 1) % n]; if (help.seq == 0) prefer = help; else prefer = announce[i]; Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

86 Rest is Same as Lock-free
while (prefer.seq == 0) { Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

87 Rest is Same as Lock-free
while (prefer.seq == 0) { Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } Decide next node to be appended Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

88 Rest is Same as Lock-free
while (prefer.seq == 0) { Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } Update next based on decision Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

89 Rest is Same as Lock-free
while (prefer.seq == 0) { Node after = before.decideNext.decide(prefer); before.next = after; after.seq = before.seq + 1; head[i] = after; } Tell world that node is appended Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

90 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Finishing the Job Once thread’s node is linked The rest is again the same as in lock-free alg Compute the result by sequentially applying the method calls in the list to a private copy of the object starting from the initial state Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

91 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Then Same Part II ... //compute my response SeqObject MyObject = new SeqObject(); current = tail.next; while (current != prefer){ MyObject.apply(current.invoc); current = current.next; } return MyObject.apply(current.invoc); In this part of the code we compute the response. Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

92 Universal Application Part II
... //compute my response SeqObject MyObject = new SeqObject(); current = tail.next; while (current != prefer){ MyObject.apply(current.invoc); current = current.next; } return MyObject.apply(current.invoc); To compute my response, return the result after applying my own method call Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
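Assembled from the slide fragments above, here is a compacted, runnable sketch of the whole wait-free apply(). Several details are assumptions beyond what the slides show: the thread id is passed as a parameter instead of being read from ThreadID.get(); the per-node consensus object is simulated with a single compareAndSet; and the sequential object is a counter whose only invocation is "inc", so the response is just the count. The loop structure follows the textbook's single outer loop (the slides' inner loop elides re-reading the end of the list):

```java
import java.util.concurrent.atomic.AtomicReference;

class Universal {
    static class Invoc { final String op; Invoc(String op) { this.op = op; } }

    static class Node {
        final Invoc invoc;
        // sticky one-shot consensus on the successor, simulated via CAS
        final AtomicReference<Node> decideNext = new AtomicReference<>();
        Node next;
        volatile int seq;        // 0 until the node is appended
        Node(Invoc invoc) { this.invoc = invoc; }
        static Node max(Node[] a) {
            Node m = a[0];
            for (Node x : a) if (x.seq > m.seq) m = x;
            return m;
        }
        Node decide(Node prefer) {
            decideNext.compareAndSet(null, prefer); // only the first call wins
            return decideNext.get();
        }
    }

    final int n;
    final Node[] announce, head;
    final Node tail = new Node(null);

    Universal(int n) {
        this.n = n;
        tail.seq = 1;                      // sentinel gets sequence number 1
        announce = new Node[n];
        head = new Node[n];
        for (int j = 0; j < n; j++) { announce[j] = tail; head[j] = tail; }
    }

    // Apply an "inc" invocation for thread i; returns the counter value
    // after my call (the Response, in this simplified object).
    public int apply(int i, Invoc invoc) {
        announce[i] = new Node(invoc);         // announce: ask others for help
        head[i] = Node.max(head);              // look for the end of the list
        while (announce[i].seq == 0) {         // until my node is appended
            Node before = head[i];
            Node help = announce[(before.seq + 1) % n]; // sliding help window
            Node prefer = (help.seq == 0) ? help : announce[i];
            Node after = before.decide(prefer);         // consensus on successor
            before.next = after;
            after.seq = before.seq + 1;
            head[i] = after;
        }
        // compute my response: replay the log on a private copy (a counter here)
        int counter = 0;
        Node current = tail.next;
        while (current != announce[i]) {
            counter++;                          // every invocation is "inc"
            current = current.next;
        }
        head[i] = announce[i];
        return counter + 1;                     // apply my own call last
    }
}
```

Replaying the log from the tail on a private copy is what makes the construction linearizable: every thread computes its response from the same agreed-upon sequence of invocations.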

93 Shared-Memory Computability
Universal Object Wait-free/Lock-free computable = Threads with methods that solve n-consensus Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

94 GetAndSet is not Universal
public class RMWRegister { private int value; public int getAndSet(int update) { int prior = this.value; this.value = update; return prior; } Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007 (1)

95 GetAndSet is not Universal
public class RMWRegister { private int value; public int getAndSet(int update) { int prior = this.value; this.value = update; return prior; } Consensus number 2 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007 (1)

96 GetAndSet is not Universal
public class RMWRegister { private int value; public int getAndSet(int update) { int prior = this.value; this.value = update; return prior; } Not universal for ≥ 3 threads Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007 (1)
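To see why getAndSet has consensus number 2, here is a sketch of two-thread consensus built from it; the explicit thread-id parameter is an assumption standing in for ThreadID.get():

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two-thread consensus from getAndSet: whoever flips the flag first wins.
class GetAndSetConsensus {
    private final int[] proposed = new int[2];
    private final AtomicInteger flag = new AtomicInteger(0); // 0 = unclaimed

    // i is the calling thread's id, 0 or 1
    public int decide(int i, int value) {
        proposed[i] = value;            // announce my input
        if (flag.getAndSet(1) == 0)
            return proposed[i];         // I was first: my value wins
        else
            return proposed[1 - i];     // the other thread got there first
    }
}
```

With two threads the loser knows the winner is the only other thread; with three or more, a loser cannot tell which of the other threads won, which is the intuition behind the impossibility.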

97 CompareAndSet is Universal
public class RMWRegister { private int value; public boolean compareAndSet(int expected, int update) { if (this.value == expected) { this.value = update; return true; } return false; }} Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007 (1)

98 CompareAndSet is Universal
public class RMWRegister { private int value; public boolean compareAndSet(int expected, int update) { if (this.value == expected) { this.value = update; return true; } return false; }} Consensus number ∞ Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007 (1)

99 CompareAndSet is Universal
public class RMWRegister { private int value; public boolean compareAndSet(int expected, int update) { if (this.value == expected) { this.value = update; return true; } return false; }} Universal for any number of threads Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007 (1)
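Why consensus number ∞: any number of threads can agree by racing to CAS their input into an initially empty cell. A sketch (the EMPTY sentinel is an assumption; it must be a value no thread proposes):

```java
import java.util.concurrent.atomic.AtomicInteger;

// n-thread consensus from compareAndSet: the first successful CAS decides.
class CASConsensus {
    private static final int EMPTY = Integer.MIN_VALUE; // assumed never proposed
    private final AtomicInteger value = new AtomicInteger(EMPTY);

    public int decide(int proposal) {
        value.compareAndSet(EMPTY, proposal); // only the first CAS succeeds
        return value.get();                   // everyone returns the winner
    }
}
```

Because the cell is written at most once, every thread, no matter how many participate or how they are scheduled, reads the same decided value: exactly the agreement and validity that consensus requires.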

100 On Older Architectures
IBM 360 testAndSet (getAndSet) NYU UltraComputer getAndAdd Neither universal Except for 2 threads Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

101 On Newer Architectures
Intel x86, Itanium, SPARC compareAndSet Alpha AXP, PowerPC Load-locked/store-conditional All universal For any number of threads Trend is clear … Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

102 Practical Implications
Any architecture that does not provide a universal primitive has inherent limitations You cannot avoid locking for concurrent data structures … But why do we care? Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

103 Locking and Scheduling
What are the practical implications of locking? Locking affects the assumptions we need to make on the operating system in order to guarantee progress Let's understand how… Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

104 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Scheduling The scheduler is the part of the OS that determines Which thread gets to run on which processor How long it runs for A given thread can thus be active, that is, executing instructions, or suspended Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

105 Review Progress Conditions
Deadlock-free: some thread trying to acquire the locks eventually succeeds. Starvation-free: every thread trying to acquire the locks eventually succeeds. Lock-free: some thread calling the method eventually returns. Wait-free: every thread calling the method eventually returns. Obstruction-free: every thread calling the method returns if it executes in isolation for long enough.

106 The Simple Snapshot is Obstruction-Free
Put increasing labels on each entry Collect twice If both agree, we're done Otherwise, try again [diagram: Collect1 = Collect2 = 1 22 7 13 18 12] Recall: if none of the labels (timestamps) changed, then there was a point, after the end of the first collect and before the start of the next collect, at which none of the registers were written to. The values collected correspond to the values that were all together in memory at that point in time. © 2007 Herlihy & Shavit

107 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Obstruction-freedom In the simple snapshot alg: The update method is wait-free But the scan is obstruction-free: it will complete only if it executes long enough without concurrent updates. Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
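The double-collect scan described above can be sketched as follows. This assumes a single writer per entry (so update can read-then-write its own register), and the class and field names are illustrative, not the book's exact code:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Simple snapshot: wait-free update, obstruction-free scan.
class SimpleSnapshot {
    private static class Labeled {
        final long label; final int value;
        Labeled(long label, int value) { this.label = label; this.value = value; }
    }

    private final AtomicReferenceArray<Labeled> regs;

    SimpleSnapshot(int n) {
        regs = new AtomicReferenceArray<>(n);
        for (int i = 0; i < n; i++) regs.set(i, new Labeled(0, 0));
    }

    // Wait-free: write the new value with a strictly larger label.
    void update(int i, int v) {
        regs.set(i, new Labeled(regs.get(i).label + 1, v));
    }

    private Labeled[] collect() {
        Labeled[] copy = new Labeled[regs.length()];
        for (int i = 0; i < copy.length; i++) copy[i] = regs.get(i);
        return copy;
    }

    // Obstruction-free: loop until two consecutive collects see the
    // same labels, i.e. no update ran in between.
    int[] scan() {
        while (true) {
            Labeled[] first = collect();
            Labeled[] second = collect();
            boolean clean = true;
            for (int i = 0; i < first.length; i++)
                if (first[i].label != second[i].label) { clean = false; break; }
            if (!clean) continue;                 // interfered with: try again
            int[] result = new int[first.length];
            for (int i = 0; i < first.length; i++) result[i] = first[i].value;
            return result;
        }
    }
}
```

The scan loop makes the obstruction-freedom visible: it terminates whenever it runs long enough in isolation, but a steady stream of concurrent updates can keep it retrying forever.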

108 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
Progress of Methods Some of the above definitions refer to locks (part of the implementation) or method calls And they ignore the scheduler Let's refine our progress definitions so that they apply to methods, and Take scheduling into account We want to move to clean definitions that do not talk about locks, which are part of the implementation. We want a clean definition of progress that talks about abstract methods. Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

109 A “Periodic Table” of Progress Conditions
Non-Blocking Blocking Everyone makes progress Wait-free Obstruction-free Starvation-free Someone makes progress Lock-free Deadlock-free

110 Progress Progress conditions relate to method calls of an object
Threads on a multiprocessor never fail: A thread is active if it takes an infinite number of concrete (machine level) steps And is suspended if not.

111 Maximal vs. Minimal For a given history H:
Minimal progress: in every suffix of H, some method call eventually completes. Maximal progress: in every suffix of H, every method call eventually completes. In some sense, the weakest interesting notion of progress requires that the system as a whole continues to advance. Consider a fixed history H. A collection of methods of a given object provides minimal progress in H if, in every suffix of H, some pending active invocation of one of the methods in the collection has a matching response. In other words, there is no point in the history where all threads that called abstract methods in the collection take an infinite number of concrete steps without returning. This condition might, for example, be useful for a thread pool, where we care about advancing the overall computation, but do not care whether individual threads are underutilized. The strongest notion of progress, and arguably the one most programmers actually want, requires that each individual thread continues to advance. A collection of methods of a given object provides maximal progress in a history H if, in every suffix of H, every pending active invocation of a method in the collection has a matching response. In other words, there is no point in the history where a thread that calls an abstract method in the collection takes an infinite number of concrete steps without returning. This condition might be useful for a web server, where each thread represents a customer request, and we care about advancing each individual computation. The difference is that between the requirements of a thread pool and those of a web server.

112 The “Periodic Table” of Progress Conditions
Non-Blocking Blocking Maximal progress Wait-free Obstruction-free Starvation-free Minimal progress Although these progress conditions may have seemed quite different, each provides either minimal or maximal progress with respect to some set of histories. The result is a simple and regular structure illustrated in the "periodic table" shown here (and its more complete counterpart later). These observations may appear so simple as to be obvious in retrospect, but we have never seen them described in this way. There are three dividing lines, two vertical and one horizontal, that split the five conditions. The leftmost vertical line separates dependent conditions from the rest. The lock-free and wait-free properties apply to any histories, while obstruction-freedom, starvation-freedom, and deadlock-freedom require some kind of external scheduler support to guarantee progress. The rightmost vertical line separates the blocking and non-blocking conditions. The lock-free, wait-free, and obstruction-free conditions are non-blocking: if a suspended thread stops at an arbitrary point in a method call, at least some active threads can make progress. The deadlock-free and starvation-free conditions do not have this property. Finally, the horizontal line separates the minimal and maximal progress conditions. The minimal conditions guarantee the system as a whole makes progress while the maximal conditions guarantee that each thread makes progress. For brevity, minimal progress properties encompass the lock-free and deadlock-free properties, while maximal properties encompass the wait-free, starvation-free, and obstruction-free properties. Later we will see several ways to cross this line. One way is "helping" (for lack of space not included in this extended abstract), an algorithmic technique that has threads help others so each and every thread makes progress.
However, in many cases, algorithms that employ helping are costly. An alternative and less costly approach is to make additional assumptions on scheduling. Lock-free Deadlock-free

113 The Scheduler’s Role On a multiprocessor progress properties:
Are not about the guarantees a method's implementation provides. They are about the scheduling assumptions needed in order to provide minimal or maximal progress. Thus, the various progress conditions are not about the progress guarantees their implementations must provide. All the properties in the table imply the same thing, maximal progress, yet they differ in the combination of scheduling assumptions necessary for an implementation to provide it. Put differently, programmers design lock-free, obstruction-free, or deadlock-free algorithms, but what they are implicitly assuming is that because of how schedulers on modern multiprocessors work, all method calls eventually complete as if they were wait-free.

114 Fair Scheduling A history is fair if each thread takes an infinite number of steps A method implementation is deadlock-free if it guarantees minimal progress in every fair history, and maximal progress in some fair history. The restriction to fair histories captures the informal requirement that each thread eventually leaves its critical section. The definition does not mention locks or critical sections because progress should be defined in terms of completed method calls, not low-level mechanisms. Moreover, as noted, not all deadlock-free object implementations will have easily recognizable locks and critical sections. The requirement that the implementation provide maximal progress in some fair history is intended to rule out certain pathological cases. For example, the first thread to access an object might lock it and never release the lock. Such an implementation guarantees minimal progress (for the thread holding the lock) in every fair execution, but does not provide maximal progress in any execution. Clearly, such an implementation would not be considered acceptable in practice and is of no interest to us.

115 Starvation Freedom A method implementation is starvation-free if it guarantees maximal progress in every fair history. Progress extends to an object by considering all its methods together.

116 Dependent Progress A progress condition is dependent if it does not guarantee minimal progress in every history, and is independent if it does. The blocking progress conditions (deadlock-freedom, starvation-freedom) are dependent. Informally, a progress condition is dependent if progress requires scheduler support

117 Non-blocking Independent Conditions
A method implementation is lock-free if it guarantees minimal progress in every history, and maximal progress in some history. A method implementation is wait-free if it guarantees maximal progress in every history.

118 The “Periodic Table” of Progress Conditions
Non-Blocking Blocking Maximal progress Wait-free Obstruction-free Starvation-free Minimal progress On multiprocessors progress properties are not about the guarantees a method's implementation provides. Rather, they are about the assumptions one needs to make on the scheduler so that a method's implementation provides minimal or maximal progress. Lock-free Deadlock-free Dependent Independent

119 Uniformly Isolating Schedules
A history is uniformly isolating if, for every k > 0, any thread that takes an infinite number of steps has an interval where it takes at least k contiguous steps Modern systems provide ways of providing isolation… later we will learn about "backoff" and "yield", in the next chapter on spin locks

120 A Non-blocking Dependent Condition
A method implementation is obstruction-free if it guarantees maximal progress in every uniformly isolating history.

121 The “Periodic Table” of Progress Conditions
Non-Blocking Blocking Uniformly isolating scheduler Maximal progress Wait-free Fair scheduler Obstruction-free Starvation-free In other words, there is no difference if we use blocking or non-blocking; they guarantee the same thing under the right scheduling assumptions. If I write a starvation-free algorithm I am assuming that I will get maximal progress, but that it will have to run on a machine with fair scheduling. Fair scheduler Minimal progress Lock-free Deadlock-free Independent Dependent

122 The “Periodic Table” of Progress Conditions
Non-Blocking Blocking Maximal progress Wait-free Obstruction-free Starvation-free In other words, there is no difference if we use blocking or non-blocking; they guarantee the same thing under the right scheduling assumptions. If I write a starvation-free algorithm I am assuming that I will get maximal progress, but that it will have to run on a machine with fair scheduling. Minimal progress Lock-free Clash-free ? Deadlock-free Independent Dependent

123 Clash-Freedom: the “Einsteinium” of Progress
A method implementation is clash-free if it guarantees minimal progress in every uniformly isolating history. Thm: clash-freedom is strictly weaker than obstruction-freedom Like Einsteinium, symbol Es, atomic number 99, it does not occur naturally in any measurable quantities and has no commercial value. In the full paper we will show that being clash-free is strictly weaker than being obstruction-free, a result omitted from this extended abstract for lack of space. Clash-freedom thus answers the open question raised by Herlihy, Luchangco, and Moir [6], whether obstruction-freedom is the weakest natural non-blocking progress condition. Unlike Einsteinium it is not radioactive, but like it, it is of no commercial importance…

124 Getting from Minimal to Maximal
Non-Blocking Blocking Maximal progress Wait-free Obstruction-free Starvation-free ? But helping is expensive In other words, there is no difference if we use blocking or non-blocking; they guarantee the same thing under the right scheduling assumptions. If I write a starvation-free algorithm I am assuming that I will get maximal progress, but that it will have to run on a machine with fair scheduling. Minimal progress Lock-free Clash-free ? Deadlock-free Helping Independent Dependent

125 Universal Constructions
Our lock-free universal construction provides minimal progress A scheduler is benevolent for that algorithm if it guarantees maximal progress in every allowable history. Many real-world operating system schedulers are benevolent They do not single out any individual thread

126 Getting from Minimal to Maximal
Universal Lock-free Construction Getting from Minimal to Maximal Universal Wait-free Construction Non-Blocking Blocking Maximal progress Wait-free Obstruction-free Starvation-free ? For a one-time object like consensus, where each thread executes a method once, wait-free and lock-free are the same… In other words, there is no difference if we use blocking or non-blocking; they guarantee the same thing under the right scheduling assumptions. If I write a starvation-free algorithm I am assuming that I will get maximal progress, but that it will have to run on a machine with fair scheduling. Minimal progress Lock-free Clash-free ? Deadlock-free Helping Use Wait-free/Lock-free Consensus Objects Independent Dependent

127 Getting from Minimal to Maximal
Universal Wait-free Construction Non-Blocking Blocking Universal Lock-free Construction Maximal progress Wait-free Obstruction-free Starvation-free In other words, there is no difference if we use blocking or non-blocking; they guarantee the same thing under the right scheduling assumptions. If I write a starvation-free algorithm I am assuming that I will get maximal progress, but that it will have to run on a machine with fair scheduling. Minimal progress Lock-free Clash-free ? Deadlock-free If we use Starvation-free/Deadlock-free Consensus Objects the result is respectively Starvation-free/Deadlock-free Independent Dependent

128 Benevolent Schedulers
Consider an algorithm that guarantees minimal progress. A scheduler is benevolent for that algorithm if it guarantees maximal progress in every allowable history. Many real-world operating system schedulers are benevolent They do not single out any individual thread

129 In Practice On a multiprocessor we will write code expecting maximal progress. Progress conditions will then define the scheduling assumptions needed in order to provide it.

130 This Means We will mostly write lock-free and lock-based deadlock-free algorithms… and expect them to behave as if they are wait-free… because modern schedulers can be made benevolent and fair.

131 Principles to Practice
We learned how to define the safety (correctness) and liveness (progress) of concurrent programs and objects We are ready to start the practice of implementing them Next lecture: implementing spin locks on multiprocessor machines… Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

132 Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007
          This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License. You are free: to Share — to copy, distribute and transmit the work to Remix — to adapt the work Under the following conditions: Attribution. You must attribute the work to “The Art of Multiprocessor Programming” (but not in any way that suggests that the authors endorse you or your use of the work). Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license. For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to Any of the above conditions can be waived if you get permission from the copyright holder. Nothing in this license impairs or restricts the author's moral rights. Art of Multiprocessor Programming© Copyright Herlihy-Shavit 2007

