CS4231 Parallel and Distributed Algorithms AY 2006/2007 Semester 2 Lecture 3 (26/01/2006) Instructor: Haifeng YU.


Slide 2: Review of Last Lecture
 Why do we need synchronization primitives?
  Busy waiting wastes CPU
  But synchronization primitives need OS support
 Semaphore:
  Using semaphores to solve the dining philosophers problem
  Avoiding deadlocks
 Monitor:
  Easier to use than semaphores / more popular
  Two kinds of monitors
  Using monitors to solve the producer-consumer and reader-writer problems

Slide 3: Today's Roadmap
 Chapter 4 "Consistency Conditions"; Chapter 5.1, 5.2 "Wait-Free Synchronization"
 What is consistency? Why do we care about it?
 Sequential consistency
 Linearizability
 Consistency models for registers

Slide 4: Abstract Data Type
 Abstract data type: a piece of data together with the allowed operations on that data
  Integer x: read(), write()
  Integer x: increment()
  Queue: enqueue(), dequeue()
  Stack: push(), pop()
 We consider shared abstract data types
  Can be accessed by multiple processes
  "Shared object" as a shorthand

Slide 5: What Is Consistency?
 Consistency – a term with a thousand definitions
 Definition in this course:
  Consistency specifies what behavior is allowed when a shared object is accessed by multiple processes
  When we say something is "consistent", we mean it satisfies the specification (according to some given spec)
 Specification:
  No right or wrong – anything can be a specification
  But to be useful, it must:
   Be sufficiently strong – otherwise the shared object cannot be used in a program
   Be implementable (efficiently) – otherwise it remains a theory
  Often a trade-off

Slide 6: Mutual Exclusion Problem: x = x+1
 Data type 1: Integer x with read(), write()
 Data type 2: Integer x with increment()
 We were using type 1 to implement type 2
 A "good" interleaving:
  process 0: read x into a register (value read: 0); increment the register (1); write the register value back to x (1)
  process 1: read x into a register (value read: 1); increment the register (2); write the register value back to x (2)
 We implicitly assumed the value read must be the value written – one possible definition of consistency

Slide 7: Mutual Exclusion Problem
 Data type 1: Integer x with read(), write()
 Data type 2: Integer x with increment()
 A "bad" interleaving:
  process 0: read x into a register (value read: 0); increment the register (1)
  process 1: read x into a register (value read: 0); increment the register (1); write the register value back to x (1)
  (one increment is lost)
 By saying this execution is not "good", we implicitly assumed some consistency definition for data type 2
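The two interleavings of x = x+1 on these slides can be replayed deterministically. The sketch below (not from the lecture; the schedule encoding is illustrative) simulates each process's read/increment/write steps over a shared variable and shows the lost update:

```python
# Simulate explicit read/inc/write schedules of "x = x + 1" over shared x.
def run(schedule):
    """Execute a schedule of (process, action) steps; each process has a
    private register. Returns the final value of shared x."""
    x = 0
    reg = {0: 0, 1: 0}
    for proc, action in schedule:
        if action == "read":
            reg[proc] = x          # read shared x into private register
        elif action == "inc":
            reg[proc] += 1         # increment the private register
        elif action == "write":
            x = reg[proc]          # write the register back to shared x
    return x

# The "good" interleaving: process 0 finishes before process 1 starts.
serial = [(0, "read"), (0, "inc"), (0, "write"),
          (1, "read"), (1, "inc"), (1, "write")]

# The "bad" interleaving: both processes read x = 0, so one increment is lost.
lost_update = [(0, "read"), (0, "inc"),
               (1, "read"), (1, "inc"), (1, "write"),
               (0, "write")]

print(run(serial))       # 2
print(run(lost_update))  # 1
```

Both runs are deterministic here because the interleaving is fixed by the schedule; with real threads, which schedule occurs is up to the scheduler.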

Slide 8: Why Do We Care About Consistency?
 Mutual exclusion is actually a way to ensure consistency (based on some consistency definition)
 Thus we have actually already dealt with consistency…
 Our goals in this lecture:
  To formalize such consistency definitions
  To explore some other useful consistency definitions

Slide 9: Formalizing a Parallel System
 Operation: a single invocation of a single method of a single shared object by some process
 For an operation e:
  proc(e): the invoking process
  obj(e): the object
  inv(e): invocation event (start time, in wall-clock / physical / real time)
  resp(e): response event (finish time, in wall-clock / physical / real time)
 Two invocation events are the same if the invoker, invokee, and parameters are the same – inv(p, read, X)
 Two response events are the same if the invoker, invokee, and response are the same – resp(p, read, X, 1)

Slide 10: (Execution) History
 (The definitions in the book are based on operations and are slightly harder to understand. We will use a different kind of definition, but the two are equivalent.)
 A history H is a sequence of invocations and responses ordered by wall-clock time
 For any invocation in H, the corresponding response is required to be in H
 One-to-one mapping between all histories and all possible executions of a parallel system
 Example: inv(p, read, X) inv(q, write, X, 1) resp(p, read, X, 0) resp(q, write, X, OK)
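The event formalism on this slide is easy to make concrete. In the sketch below (illustrative encoding, not from the book) a history is a time-ordered list of event tuples, and a small check verifies the requirement that every invocation has a matching response in H:

```python
# Events are tuples: ("inv", process, op, object[, arg]) or
#                    ("resp", process, op, object[, result]).
def is_complete(history):
    """True iff each invocation has a matching response later in H.
    Events match when invoker, method, and object agree."""
    pending = []
    for kind, *ev in history:
        key = tuple(ev[:3])            # (process, op, object)
        if kind == "inv":
            pending.append(key)
        elif key in pending:
            pending.remove(key)
    return not pending

# The example history from the slide: p's read of X overlaps q's write.
H = [("inv", "p", "read", "X"),
     ("inv", "q", "write", "X", 1),
     ("resp", "p", "read", "X", 0),
     ("resp", "q", "write", "X", "OK")]

print(is_complete(H))      # True
print(is_complete(H[:2]))  # False: both responses are missing
```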

Slide 11: Sequential History and Concurrent History
 A history H is sequential if any invocation is always immediately followed by its response
  I.e., no interleaving within operations (but interleaving is allowed between operations)
 Otherwise the history is called concurrent
 Sequential: inv(p, read, X) resp(p, read, X, 0) inv(q, write, X, 1) resp(q, write, X, OK)
 Concurrent: inv(p, read, X) inv(q, write, X, 1) resp(p, read, X, 0) resp(q, write, X, OK)
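The sequential-vs-concurrent distinction is purely syntactic, so it can be checked directly. A minimal sketch (same illustrative tuple encoding as before):

```python
# Events are tuples: ("inv"|"resp", process, op, object, value).
def is_sequential(history):
    """True iff every invocation is immediately followed by its response."""
    for i, ev in enumerate(history):
        if ev[0] == "inv":
            if i + 1 >= len(history):
                return False
            nxt = history[i + 1]
            # The next event must be a response by the same process,
            # for the same method, on the same object.
            if nxt[0] != "resp" or nxt[1:4] != ev[1:4]:
                return False
    return True

seq = [("inv", "p", "read", "X", None),  ("resp", "p", "read", "X", 0),
       ("inv", "q", "write", "X", 1),    ("resp", "q", "write", "X", "OK")]
conc = [("inv", "p", "read", "X", None), ("inv", "q", "write", "X", 1),
        ("resp", "p", "read", "X", 0),   ("resp", "q", "write", "X", "OK")]

print(is_sequential(seq))   # True
print(is_sequential(conc))  # False
```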

Slide 12: Sequential Legal History and Subhistory
 A sequential history H is legal if all responses satisfy the sequential semantics of the object
  (Sequential means: any invocation is always immediately followed by its response)
 It is possible for a sequential history not to be legal
 Process p's process subhistory of H (H | p): the subsequence of all events of p
  Thus a process subhistory is always sequential
 Object o's object subhistory of H (H | o): the subsequence of all events of o

Slide 13: Equivalency and Process Order
 Two histories are equivalent if they have exactly the same set of events
  Same events imply all responses are the same
  Ordering of the events may be different
  Users only care about responses – ordering is not observable
 Process order is a partial order among all events
  For any two events of the same process, process order is the same as execution order
  No other additional orders

Slide 14: Sequential Consistency
 First defined by Lamport: "…the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program"
 Use sequential order as a comparison point
 Focus on results only – that is what users care about (what if we only allowed sequential histories?)
 Require that program order is preserved
 All our earlier formalism served to formalize the above

Slide 15: Sequential Consistency
 A history H is sequentially consistent if it is equivalent to some legal sequential history S that preserves process order
 Use sequential order as a comparison point
 Focus on results only – that is what users care about (what if we only allowed sequential histories?)
 Require that program order is preserved

Slide 16: Examples
 [Timing diagrams: four histories in which P executes write(x,1) and Q executes read(x) returning 0, with different amounts of overlap between the two operations; each is marked  (sequentially consistent), since Q's read can be ordered before P's write]
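The slide-15 definition can be checked by brute force for small histories like the examples here. The sketch below (illustrative; operations are encoded without timing, since sequential consistency ignores real time) searches for a permutation that preserves each process's order and is legal for read/write register semantics:

```python
from itertools import permutations

def legal(seq, init=0):
    """A sequential register history is legal iff every read returns the
    value of the most recent preceding write (or the initial value)."""
    val = init
    for proc, op, v in seq:
        if op == "write":
            val = v
        elif v != val:
            return False
    return True

def preserves_proc_order(perm, ops):
    procs = {o[0] for o in ops}
    return all([o for o in perm if o[0] == p] == [o for o in ops if o[0] == p]
               for p in procs)

def seq_consistent(ops, init=0):
    """ops is the history's operations in wall-clock order of completion;
    brute-force search over all legal, process-order-preserving orders."""
    return any(preserves_proc_order(perm, ops) and legal(perm, init)
               for perm in permutations(ops))

# Like the slide-16 examples: P writes 1, Q's read returns 0.
# Sequentially consistent: order Q's read before P's write.
ops = [("P", "write", 1), ("Q", "read", 0)]
print(seq_consistent(ops))  # True

# But if the SAME process reads 0 after writing 1, process order
# forbids any fix.
bad = [("P", "write", 1), ("P", "read", 0)]
print(seq_consistent(bad))  # False
```

Brute force is factorial in the number of operations, so this is only a specification-level checker, not an algorithm one would run on real traces.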

Slide 17: Examples
 [Timing diagrams, each marked "?": (1) P executes write(x,1) while Q's reads return 0 – sequentially consistent or not? (2) P executes write(x,1) while Q's read returns 2 – a value never written, so no legal sequential history can produce it]

Slide 18: Motivation for Linearizability
 Sequential consistency is arguably the most widely used consistency definition
  E.g., almost all commercial databases
  Many multi-processors
 But sometimes it is not strong enough
 [Timing diagram, marked : P's write(x,1) completes before Q's read(x) even starts, yet the read returns 0 – still sequentially consistent, though users may find it surprising]

Slide 19: More Formalism
 A history H uniquely induces the following "<" partial order among operations:
  o1 < o2 iff the response of o1 appears in H before the invocation of o2
 What is the partial order induced by a sequential history? (A total order – every pair of operations is ordered.)
 [Timing diagrams labeling P's write(x,1) as o1 and Q's read(x) as o2, with and without overlap]

Slide 20: Linearizability
 Intuitive definition: the execution is equivalent to some execution in which each operation happens instantaneously at some point (called the linearization point) between its invocation and its response
 [Timing diagram: P's write(x,1) completes before Q's read(x) starts, yet the read returns 0 – this history is sequentially consistent but not linearizable]

Slide 21: Linearizability
 Intuitive definition: the execution is equivalent to some execution in which each operation happens instantaneously at some point between its invocation and its response
 You can directly formalize the above intuition, OR
 Formal definition from the textbook: a history H is linearizable if
  1. It is equivalent to some legal sequential history S, and
  2. The operation partial order induced by H is a subset of the operation partial order induced by S
 Are the two definitions the same?
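The textbook definition also admits a brute-force checker for small histories: compared with sequential consistency, the chosen sequential order must additionally contain the real-time "<" order induced by H. A sketch (illustrative encoding; each operation carries explicit invocation and response times, which is an assumption of this encoding, not the lecture's notation):

```python
from itertools import permutations

def linearizable(ops, init=0):
    """ops: list of (proc, op, value, inv_time, resp_time) for a register.
    Brute-force search for a legal sequential order containing H's "<"."""
    idx = range(len(ops))
    for perm in permutations(idx):
        pos = {j: k for k, j in enumerate(perm)}
        # Real-time constraint: if ops[i] responds before ops[j] is invoked,
        # ops[i] must precede ops[j] in the linearization. (This subsumes
        # process order, since one process's operations never overlap.)
        if any(ops[i][4] < ops[j][3] and pos[i] > pos[j]
               for i in idx for j in idx):
            continue
        val, ok = init, True
        for k in perm:
            _, op, v, _, _ = ops[k]
            if op == "write":
                val = v
            elif v != val:
                ok = False
                break
        if ok:
            return True
    return False

# The slide-20 history: write(x,1) responds at time 1, read(x) starts at
# time 2 and returns 0 -- sequentially consistent, but NOT linearizable.
ops = [("P", "write", 1, 0, 1), ("Q", "read", 0, 2, 3)]
print(linearizable(ops))  # False

# If the read overlaps the write, its linearization point can come first.
ops2 = [("P", "write", 1, 0, 3), ("Q", "read", 0, 1, 2)]
print(linearizable(ops2))  # True
```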

Slide 22: Linearizability
 A linearizable history must be sequentially consistent
 Linearizability is arguably the strongest form of consistency people have ever defined
 [Another interesting example: o1 = P's write(x,1); o2 = Q's read(x) returning 1; o3 = R's read(x) returning 0]

Slide 23: Local Property
 Linearizability is a local property, in the sense that H is linearizable if and only if for every object x, H | x is linearizable
 Sequential consistency is not a local property

Slide 24: Sequential Consistency Is Not a Local Property
 x, y are initially empty queues
 [Timing diagram of history H:
  P: enque(x,1) ok(); enque(y,1) ok(); deque(x) ok(2)
  Q: enque(y,2) ok(); enque(x,2) ok(); deque(y) ok(1)]
 H | x is sequentially consistent: e.g., order enque(x,2), enque(x,1), deque(x) ok(2)
 Similarly, H | y is sequentially consistent as well

Slide 25: Sequential Consistency Is Not a Local Property
 [Same history H as on the previous slide; x, y are initially empty queues]
 But H itself is not sequentially consistent
  deque(x) ok(2) forces enque(x,2) before enque(x,1), and deque(y) ok(1) forces enque(y,1) before enque(y,2); combined with the two process orders, this creates a cycle
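The two-queue counterexample from these slides can also be confirmed by brute force. The sketch below (illustrative encoding; the per-process assignment of operations follows the reconstruction above) checks each projection and then the whole history:

```python
from itertools import permutations

def queue_legal(seq):
    """Sequential FIFO-queue semantics: deque must return the head."""
    qs = {}
    for proc, op, obj, val in seq:
        q = qs.setdefault(obj, [])
        if op == "enq":
            q.append(val)
        elif not q or q.pop(0) != val:
            return False
    return True

def seq_consistent(ops):
    procs = {o[0] for o in ops}
    return any(all([o for o in perm if o[0] == p] ==
                   [o for o in ops if o[0] == p] for p in procs)
               and queue_legal(perm)
               for perm in permutations(ops))

# History H from slide 24, listed in each process's program order.
H = [("P", "enq", "x", 1), ("P", "enq", "y", 1), ("P", "deq", "x", 2),
     ("Q", "enq", "y", 2), ("Q", "enq", "x", 2), ("Q", "deq", "y", 1)]

print(seq_consistent([o for o in H if o[2] == "x"]))  # True  (H | x)
print(seq_consistent([o for o in H if o[2] == "y"]))  # True  (H | y)
print(seq_consistent(H))                              # False (H itself)
```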

Slide 26: Proving That Linearizability Is a Local Property
 Intuitive definition: the execution is equivalent to some execution in which each operation happens instantaneously at some point between its invocation and its response
 The proof is trivial if we use the above definition (after formalizing it)

Slide 27: Proving That Linearizability Is a Local Property
 Definition from the textbook: a history H is linearizable if
  1. It is equivalent to some legal sequential history S, and
  2. The operation partial order induced by H is a subset of the operation partial order (in fact, total order) induced by S
 We want to show: H is linearizable if for every object x, H | x is linearizable
 We construct a directed graph over all operations as follows. A directed edge is created from o1 to o2 if
  o1 and o2 are on the same object x, and o1 is before o2 when linearizing H | x (we say o1 → o2 due to x), OR
  o1 < o2 as induced by H, i.e., the response of o1 appears before the invocation of o2 in H (we say o1 → o2 due to H)

Slide 28: Proving That Linearizability Is a Local Property
 Lemma: the resulting directed graph does not have a cycle (we prove this later)
 Any topological sort of the graph gives us a legal sequential history S, and the edges in the graph are a subset of the total order induced by S – theorem proved
 [Diagram: the graph's nodes grouped into operations on o1, operations on o2, operations on o3]

Slide 29: Proving That Linearizability Is a Local Property
 Lemma: the resulting directed graph does not have a cycle
 Proof by contradiction
 [Diagram: a hypothetical cycle through operations on obj1, obj2, obj3]

Slide 30: Proving That Linearizability Is a Local Property
 [Diagram: the red cycle through operations on obj1, obj2, obj3]
 The red cycle is composed of:
  Edges due to obj3, followed by
  Edges due to H, followed by
  Edges due to obj2, followed by
  Edges due to H, followed by
  …

Slide 31: Proving That Linearizability Is a Local Property
 It is impossible to have edges due to some object x immediately followed by edges due to a different object y
  (An edge due to an object connects two operations on that object, and each operation is on a single object – so the shared middle operation would have to be on both x and y)

Slide 32: Proving That Linearizability Is a Local Property
 To generalize, any cycle must be composed of:
  Edges due to some object x, followed by
  Edges due to H, followed by
  Edges due to some object y, followed by
  Edges due to H, followed by
  …

Slide 33: Proving That Linearizability Is a Local Property
 Moreover, any such cycle can be reduced to the form:
  A single edge due to some object x, followed by
  A single edge due to H, followed by
  A single edge due to some object y, followed by
  A single edge due to H, followed by
  …
 Consecutive edges due to the same object x collapse into a single edge, because H | x is equivalent to a sequential history S, and S is a total order
 Consecutive edges due to H collapse into a single edge, because the partial order induced by H is transitive

Slide 34: Proving That Linearizability Is a Local Property
 Thus we must have a cycle of the form:
  A single edge due to x (A → B), followed by
  A single edge due to H (B → C), followed by
  A single edge due to y (C → D), followed by
  A single edge due to H (D → A)

Slide 35: Proving That Linearizability Is a Local Property
 [Diagram: cycle A → B due to x, B → C due to H, C → D due to y, D → A due to H]
 From the edge D → A (due to H): resp(D) precedes inv(A) in H
 From the edge A → B (due to x): the linearization of H | x respects the real-time order of H, so inv(A) precedes resp(B)
 From the edge B → C (due to H): resp(B) precedes inv(C)
 Chaining these: resp(D) precedes inv(C), i.e., D < C in H – so the linearization of H | y must put D before C, and it is impossible to have the edge C → D due to y. Contradiction.

Slide 36: Consistency Definitions for Registers
 A register is a kind of abstract data type: a single value that can be read and written
 An (implementation of a) register is called atomic if the implementation always ensures linearizability of the history
 An (implementation of a) register is called ?? if the implementation always ensures sequential consistency of the history

Slide 37: Consistency Definitions for Registers
 An (implementation of a) register is called regular if the implementation always ensures that a read returns one of the values written by
  the most recent writes, OR
  overlapping writes (if any)
 Definition of "most recent write": among the writes whose responses precede the read's invocation, the one whose response time is the latest
 An atomic register must be regular

Slide 38: Regular ⇒ Sequential Consistency? Sequential Consistency ⇒ Regular?
 (Assuming the initial value of the register is 0)
 [Timing diagram: P executes write(1); Q's read(0) starts after the write completes – sequentially consistent but not regular]
 [Timing diagram: P's write(1) overlaps both of Q's reads; Q executes read(1) and then read(0) – regular but not sequentially consistent]
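The regularity condition on slide 37 can also be checked mechanically for these examples. A sketch (illustrative encoding with explicit invocation/response times, which the slides only show as a diagram):

```python
def regular(ops, init=0):
    """ops: list of (op, value, inv_time, resp_time) on one register.
    Each read must return the most recent completed write's value
    (or the initial value if no write has completed), or the value
    of some write overlapping the read."""
    writes = [o for o in ops if o[0] == "write"]
    for op, v, inv, resp in ops:
        if op != "read":
            continue
        done = [w for w in writes if w[3] < inv]   # completed before the read
        if done:
            # Most recent write: latest response among completed writes.
            allowed = {max(done, key=lambda w: w[3])[1]}
        else:
            allowed = {init}
        # Any write overlapping the read is also allowed.
        allowed |= {w[1] for w in writes if w[2] < resp and inv < w[3]}
        if v not in allowed:
            return False
    return True

# Left history: write(1) completes at time 1, then read(0) at [2,3]
# -- sequentially consistent, but NOT regular.
left = [("write", 1, 0, 1), ("read", 0, 2, 3)]
print(regular(left))   # False

# Right history: write(1) over [0,5] overlaps read(1) and read(0)
# -- regular (each read may return old or new), but not seq. consistent.
right = [("write", 1, 0, 5), ("read", 1, 1, 2), ("read", 0, 3, 4)]
print(regular(right))  # True
```

The second history is the classic "new/old inversion" that regularity permits and atomicity rules out.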

Slide 39: Consistency Definitions for Registers
 An (implementation of a) register is called safe if the implementation always ensures that a read that does not overlap any write returns one of the values written by the most recent writes
  If there are overlapping writes, a read can return anything
 Safe ⇒ Sequential Consistency? Sequential Consistency ⇒ Safe?
 A regular register must be safe

Slide 40: Summary
 What is consistency? Why do we care about it?
 Sequential consistency
 Linearizability
  Linearizability is a local property
 Consistency models for registers

Slide 41: Homework Assignment
 Page 62: Problem 4.1
  If you believe a history is linearizable/serializable, give the equivalent sequential history
  If you believe a history is not linearizable/serializable, prove that no equivalent sequential history exists
 Slide 26: prove that linearizability is a local property using the definition on slide 26 (formalize the definition first)
 Think about slide 21: prove that the two definitions of linearizability are equivalent
 Homework is due a week from today
 Read Chapter 7