1 Introduction to distributed systems
- description
- relation to practice
- variables and communication primitives
- instructions
- states, actions and programs
- synchrony
- computations and program properties
- causality relationships

2 What is a distributed system
A distributed system is a collection of independent processes that communicate with each other and cooperate to achieve a common goal.
- a process is a set of instructions and variables
- each process proceeds at its own speed
- the only way for one process to coordinate with others is via communication
- thus the system consists of a set of processes connected by a network of communication links
Sc – Fred B. Schneider, "On Concurrent Programming", chapter 1

3 Why distributed systems
A distributed system is a convenient abstraction that models various practical computer architectures:
- multiprocess OS
- multiprocessor computer architecture
- local area network
- Internet
- VLSI chip
The abstraction allows designing algorithms that are correct irrespective of implementation details.

4 Variables and communication primitives
To keep track of its progress, a process contains variables. A variable is an object, i.e. it has
- possible states (domain) – integers, booleans, etc.
- a current state
- applicable operations – read, write, append, reset, etc.; an operation may take an argument and return a value
Variables can be
- local – only one process can access the variable
- shared – multiple processes can access it
The shared variables we study are of two types:
- register – a variable of a common type (like integer) shared by exactly two processes
- (FIFO communication) channel – a queue of messages shared between two processes, a sender and a receiver; the sender can attach messages to the tail, the receiver can remove them from the head (why is a channel a variable?)
A distributed system uses either channels or registers and is hence called a message-passing or a shared-memory system.
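
To make the channel-as-variable view concrete, here is a minimal Python sketch (not from the slides; the class and method names are illustrative assumptions) of a FIFO channel shared by a sender and a receiver.

    from collections import deque

    class Channel:
        """A FIFO channel: a shared variable whose state is a queue of messages."""
        def __init__(self):
            self._queue = deque()             # current state of the variable

        def send(self, msg):                  # operation available to the sender
            self._queue.append(msg)           # attach the message to the tail

        def receive(self):                    # operation available to the receiver
            if self._queue:
                return self._queue.popleft()  # remove the message at the head
            return None                       # receive is not enabled on an empty channel

    # usage: two processes share the channel; no other coordination is possible
    ch = Channel()
    ch.send("hello")        # executed by the sender process
    print(ch.receive())     # executed by the receiver process -> "hello"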

5 System topology
If two processes share a variable, there is a (communication) link between them. Links and processes comprise the topology of a distributed system. Common topologies: doubly linked ring, star, ring, fully connected, partially connected, tree.
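
As a small illustration (not part of the slide), a topology can be recorded as an adjacency structure: a link exists between two processes exactly when they share a variable. The helpers below build a ring and a star; the names are assumptions made for the example.

    def ring(n):
        """Ring of n processes: process i is linked to its two neighbours."""
        return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

    def star(n):
        """Star: process 0 (the center) is linked to every other process."""
        links = {i: {0} for i in range(1, n)}
        links[0] = set(range(1, n))
        return links

    print(ring(4))   # {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(star(4))   # {1: {0}, 2: {0}, 3: {0}, 0: {1, 2, 3}}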

6 Instructions
The instructions of a process can be specified in two styles:
- program actions (guarded commands) – a command is executed when its boolean predicate (guard) is true; easier to reason about
    boolean predicate → command
    …
    boolean predicate → command
- program counter-based – commands are executed in sequence; (somewhat) easier to implement
    command; … command;
A predicate is a Boolean function that evaluates to true or false.
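
The difference between the two styles can be sketched in Python (an illustration, not the slide's notation): a guarded-command program is just a set of (guard, command) pairs, any of which may fire when its guard holds, while a program counter-based program is a fixed sequence of commands.

    # guarded-command style: any action whose guard holds may fire, in any order
    actions = [
        (lambda s: s["x"] < s["y"], lambda s: s.update(x=s["x"] + 1)),   # x < y -> x := x + 1
        (lambda s: s["y"] < s["x"], lambda s: s.update(y=s["y"] + 1)),   # y < x -> y := y + 1
    ]

    # program counter-based style: commands are executed strictly in sequence
    def pc_program(s):
        s["x"] = s["x"] + 1
        s["y"] = s["y"] + 1

    state = {"x": 0, "y": 2}
    enabled = [cmd for guard, cmd in actions if guard(state)]   # guards that hold in this state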

7 Example Guarded Command Programs
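
The transcript does not reproduce the programs shown on this slide. As a stand-in, here is a classic guarded-command example (Dijkstra's GCD program) written in the guard → command style of the previous slide and simulated in Python; the choice of program is an assumption, not necessarily what appears on the slide.

    import random

    # guarded-command program computing gcd(x, y):
    #   x > y -> x := x - y
    #   y > x -> y := y - x
    actions = [
        (lambda s: s["x"] > s["y"], lambda s: s.update(x=s["x"] - s["y"])),
        (lambda s: s["y"] > s["x"], lambda s: s.update(y=s["y"] - s["x"])),
    ]

    state = {"x": 24, "y": 36}
    while True:
        enabled = [cmd for guard, cmd in actions if guard(state)]
        if not enabled:                    # fixpoint: no guard holds, x == y == gcd
            break
        random.choice(enabled)(state)      # non-deterministic choice of an enabled action
    print(state)                           # {'x': 12, 'y': 12}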

8 States, Actions and Programs
An assignment of values to all variables in a distributed system is a (global) state of the distributed system.
- if the system is message-passing, the state includes an assignment of messages to communication channels
An atomic operation (action) is an operation that cannot interleave with other atomic operations.
- an atomic operation takes the system from one state to another
- atomicity is hard to determine in program counter-based programs (more on that later)
An operation is enabled at a certain state if it can be executed at that state.
A (distributed) algorithm defines a set of initial states for the system and a set of possible atomic operations for the processes.
The execution of an action is an event.
A fixpoint (quiescent state) is a state where none of the actions are enabled.
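
A minimal sketch of these definitions for a message-passing system (assumed names, not from the slide): the global state includes the channel contents, an action is enabled only when it can execute, and a fixpoint is a state with no enabled actions.

    # global state: local variables of both processes plus the channel contents
    state = {"p_sent": False, "q_got": None, "chan": []}

    def send_enabled(s):  return not s["p_sent"]
    def send(s):          s["chan"].append("token"); s.update(p_sent=True)

    def recv_enabled(s):  return len(s["chan"]) > 0         # a message is in the channel
    def recv(s):          s.update(q_got=s["chan"].pop(0))  # remove from the head

    actions = [(send_enabled, send), (recv_enabled, recv)]

    def enabled(s):
        return [a for g, a in actions if g(s)]

    def is_fixpoint(s):
        return not enabled(s)       # quiescent: no action is enabled

    while not is_fixpoint(state):
        enabled(state)[0](state)    # execute one enabled action; each execution is an event
    print(state)                    # {'p_sent': True, 'q_got': 'token', 'chan': []}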

9 Synchrony
The speed with which a process executes its actions is limited by the synchrony assumptions about the distributed system.
Synchronous Communication Model
- processes take steps simultaneously (execution proceeds in synchronous rounds)
- time bound = 1
Partially Synchronous Communication Model
- processes proceed independently, but a maximum difference between the slowest and fastest process exists
- processing speeds and message propagation delay are bounded, but at least one of the two bounds is not known a priori
- the bounds hold after some finite but unknown Global Stabilization Time (GST)
- each process determines the bound by increasing its approximation
- time bound = T
Asynchronous Communication Model
- processes execute actions in arbitrary order (with arbitrary speeds) – one process may execute arbitrarily faster than another
- no time bounds on processing speeds and message delivery delay (time-free)
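
One way to make the synchronous model concrete (an illustrative sketch with assumed names, not from the slide): in each round every process takes one step, and a message sent in round r is delivered at the start of round r+1, so the delivery time bound is one round. Under the asynchronous model the same messages could instead be delayed for arbitrarily many steps.

    def synchronous_execution(processes, rounds):
        """Each round: deliver last round's messages, then every process takes one step."""
        in_flight = {p: [] for p in processes}            # messages sent in the previous round
        for r in range(rounds):
            inboxes, in_flight = in_flight, {p: [] for p in processes}
            for p, step in processes.items():             # all processes step "simultaneously"
                for dest, msg in step(r, inboxes[p]):      # a step may send messages
                    in_flight[dest].append(msg)
            print("round", r, "delivered", inboxes)        # every message took exactly one round

    # two toy processes: p sends the round number to q, q replies to whatever arrived
    procs = {
        "p": lambda r, inbox: [("q", r)],
        "q": lambda r, inbox: [("p", m + 1) for m in inbox],
    }
    synchronous_execution(procs, rounds=3)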

10 Computations
A computation is a sequence of states such that the first state is an initial state and each subsequent state is obtained by executing actions enabled at the preceding state.
Execution semantics:
- interleaving – one enabled action is non-deterministically selected for execution » we shall mostly consider this one
- power-set – a subset of the enabled actions is selected for execution » synchronous execution (the extreme case of power-set semantics): all enabled actions are executed
A computation is either infinite or ends in a fixpoint.
A computation is weakly fair if no action is enabled in infinitely many consecutive states without being executed (no process can “get stuck” forever).
- more formally: if an action is enabled in all but finitely many states, it is executed infinitely often
Recall the definition of a program: a program under a certain model (fixed synchrony and fairness assumptions) defines a (possibly infinite) set of computations.
- to put it another way, a program is equivalent to the set of computations it defines, and the program code is a convenient shorthand for recording these computations
[Q] How is real time/delay modeled?
- claim: for every time-based behavior there is a computation
- what computation stands for a very slow process? a very fast process? a message delayed in a channel?
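
A small sketch of interleaving semantics and weak fairness (illustrative, not from the slide): two always-enabled actions are run by two schedulers. The "unfair" scheduler always picks the first enabled action, so the second action is enabled forever but never executes; a round-robin scheduler produces weakly fair computations.

    # two always-enabled actions: each increments its own counter
    initial = {"a": 0, "b": 0}
    actions = [
        (lambda s: True, lambda s: s.update(a=s["a"] + 1)),
        (lambda s: True, lambda s: s.update(b=s["b"] + 1)),
    ]

    def unfair(step, enabled):        # always picks the first enabled action:
        return enabled[0]             # the second action never runs

    def round_robin(step, enabled):   # weakly fair: an always-enabled action runs infinitely often
        return enabled[step % len(enabled)]

    def run(scheduler, steps):
        s = dict(initial)
        for step in range(steps):                     # a finite prefix of a computation
            enabled = [cmd for g, cmd in actions if g(s)]
            if not enabled:
                break                                 # fixpoint reached
            scheduler(step, enabled)(s)               # interleaving: one action per step
        return s

    print(run(unfair, 10))       # {'a': 10, 'b': 0}  -- not weakly fair
    print(run(round_robin, 10))  # {'a': 5, 'b': 5}   -- weakly fair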

11 Program properties
A prefix is a finite segment of a computation that starts in an initial state and ends in some state.
A suffix is a computation segment that starts in some state and is either infinite or ends in a fixpoint.
- a computation is obtained by joining a prefix and a suffix
- can two separate computations share a prefix? a suffix? both?
A property is a set of correct computations.
- safety – there is a set of prefixes none of which a correct computation may contain (something “bad” never happens)
- liveness – for every prefix there is a suffix whose addition creates a correct computation (something “good” eventually happens)
Theorem: every property is an intersection of a safety property and a liveness property.
A program conforms to (or satisfies) a property if every computation of the program is inside the set of computations defined by the property.
A property is satisfiable if there exists a program that satisfies it.
- examples of properties that are not satisfiable?
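
A small illustration (the example property is an assumption, not from the slide) of the asymmetry between the two kinds of properties: a safety violation already shows up in some finite prefix, while a liveness property can never be refuted by a finite prefix alone, because a suitable suffix might still follow.

    # a computation here is a sequence of states; each state is just the value of x
    prefix = [0, 1, 2, 3, -1, 4]

    # safety: "x is never negative" -- violated as soon as a bad prefix appears
    def safety_violated(prefix):
        return any(x < 0 for x in prefix)

    # liveness: "x eventually equals 10" -- no finite prefix can violate it,
    # since the prefix might still be extended by a suffix in which x reaches 10
    def liveness_refuted_by(prefix):
        return False

    print(safety_violated(prefix))      # True: the prefix [0, 1, 2, 3, -1] is "bad"
    print(liveness_refuted_by(prefix))  # False, regardless of the prefix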

12 Example properties: mutual exclusion
In the mutual exclusion problem (MX), a set of processes alternate between executing a critical section (CS) of code and other code. A solution to the mutual exclusion problem has to satisfy three properties:
- exclusivity – only one process at a time is allowed to enter the CS
- progress – some process eventually enters the CS
- lack of starvation – each process requesting the CS is eventually allowed to enter
Which properties are liveness? safety? both? Which are redundant?
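
To make the classification concrete, here is a sketch (the trace format is an assumption, not from the slide) that checks exclusivity on a finite prefix of a computation. Exclusivity is safety, so a violation is visible in a finite prefix; progress and lack of starvation are liveness properties and can only be judged on whole, possibly infinite, computations.

    # a prefix of a computation, recorded as (process, event) pairs
    trace = [("p", "enter"), ("p", "exit"), ("q", "enter"), ("p", "enter")]

    def exclusivity_violated(trace):
        """Safety check: at no point are two processes inside the CS simultaneously."""
        in_cs = set()
        for proc, event in trace:
            if event == "enter":
                if in_cs:                  # someone is already in the CS
                    return True
                in_cs.add(proc)
            elif event == "exit":
                in_cs.discard(proc)
        return False

    print(exclusivity_violated(trace))     # True: "p" enters while "q" is still in the CS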

13 Causality relationship
An event is usually influenced by only part of the state. Two consecutive events influencing disjoint parts of the state are independent and can occur in reverse order. This intuition is captured in the notion of the causality relation →. Two events e and f are causally related if
- e and f are different events of the same process and e occurs before f; then e → f
- for message-passing systems: if s is a send event and r is the corresponding receive event, then s → r
- for shared-memory systems: two operations on the same data item, one of which is a write, are causally related
→ is a partial order (i.e. the relation is transitive, reflexive and antisymmetric; what does it mean?)
If neither a → b nor b → a, then a and b are concurrent: a || b.
Two computations are equivalent if they differ only in the order of concurrent operations.
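
The causality relation for message-passing systems is commonly tracked with vector clocks; the sketch below (an illustration in Python, not part of the slide) timestamps events so that e → f exactly when e's vector is componentwise ≤ f's and the two differ, and two events are concurrent when neither dominates the other.

    def happened_before(v, w):
        """e -> f iff e's vector clock is componentwise <= f's and they differ."""
        return all(a <= b for a, b in zip(v, w)) and v != w

    def concurrent(v, w):
        return not happened_before(v, w) and not happened_before(w, v)

    # two processes p0, p1; each event carries a vector clock [c0, c1]
    e1 = [1, 0]            # p0: local event
    e2 = [2, 0]            # p0: send a message
    e3 = [0, 1]            # p1: local event (concurrent with e1, e2)
    e4 = [2, 2]            # p1: receive p0's message (clock merged with the sender's)

    print(happened_before(e1, e2))   # True:  same process, e1 before e2
    print(happened_before(e2, e4))   # True:  send -> receive
    print(concurrent(e2, e3))        # True:  e2 || e3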

14 Atomicity, program-counter and message-passing
Sometimes designers of program counter-based programs do not explicitly specify the atomicity of their programs. Atomicity can be easily restored for message-passing systems: the following sequences of instructions are atomic:
- message receipt, followed by local computation and message sending up to the next message receipt or program termination
- this atomic action is enabled when the message to be received is at the head of the channel
Claim: a computation where local computation instructions interleave is equivalent to a computation with actions of the above atomicity. Why?
Corollary: we need only consider actions of this atomicity.
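
A sketch of this atomicity convention (illustrative names, not from the slide): a process is written as a handler that, in one atomic action, removes the message at the head of its channel, performs its local computation, and sends any resulting messages before the next receipt.

    from collections import deque

    channel = deque(["ping", "ping"])         # incoming FIFO channel of this process
    outgoing = deque()                        # channel toward the other process
    count = 0

    def handle(msg):
        """One atomic action: receipt + local computation + sends until the next receipt."""
        global count
        count += 1                            # local computation
        outgoing.append(("pong", count))      # message sending

    while channel:                            # enabled while a message is at the head
        handle(channel.popleft())

    print(count, list(outgoing))              # 2 [('pong', 1), ('pong', 2)]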

