COT 5611 Operating Systems Design Principles Spring 2012
Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 5:00-6:00 PM
Lecture 8 - Monday February 6
Reading assignment: the class notes “Distributed systems – basic concepts” and “Petri Nets,” available online.
Last time:
Distributed snapshots
Enforced modularity: the client-server paradigm
Consensus protocols
1/18/2019 Lecture 8
Today:
Consensus protocols
Modeling concurrency – Petri Nets
Consensus protocols and consensus service
Consensus: agreeing on one of several alternatives proposed by a number of agents. No fault-tolerant consensus protocol can guarantee progress, but protocols that guarantee freedom from inconsistencies (safety) have been developed. Paxos is a family of protocols for reaching consensus based on a finite-state-machine approach.
Consensus service: clients send requests to processes, propose a value, and wait for a response; the goal is to get the set of n processes to reach consensus on a single proposed value. The messages are sent through a network.
A Byzantine failure in a distributed system could be an omission failure, e.g., a crash failure, or a failure to receive a request or to send a response; it could also be a commission failure, e.g., processing a request incorrectly, corrupting the local state, and/or sending an incorrect or inconsistent response to a request.
Assumptions about the processors and the network
The processes run on processors and communicate through a network. The processors and the network may experience failures, but not Byzantine failures.
The processors: operate at arbitrary speeds; have stable storage and may rejoin the protocol after a failure; can send messages to any other processor.
The network: may lose, reorder, or duplicate messages; messages are sent asynchronously and may take arbitrarily long to reach the destination.
Actors and the flow of messages
Each process advocates a value proposed by a client and could play one, two, or all three of the following roles:
Acceptors are the persistent, fault-tolerant memory of the system; they decide which value to choose.
Proposers propose a value, sent by a client, to be chosen by the acceptors.
Learners learn which value was chosen and act as the replication factor of the protocol.
The leader is an elected proposer. A quorum is a subset of all acceptors; any two quorums share at least one member.
When each process plays all three roles (proposer, acceptor, and learner), the flow of messages can be described as follows: clients send messages to a leader; during normal operation the leader receives a client's command, assigns it a new command number i, and then begins the i-th instance of the consensus algorithm by sending messages to a set of acceptor processes.
A proposal is a pair (pn, v) with a unique proposal number pn and a proposed value v. Multiple proposals may propose the same value v. A value is chosen if a simple majority of acceptors have accepted it.
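The quorum-intersection property stated above (any two quorums share at least one member) holds automatically for simple-majority quorums. A minimal sketch, with a hypothetical five-acceptor configuration, checks the property exhaustively:

```python
from itertools import combinations

# Hypothetical set of five acceptors; names are illustrative.
acceptors = {"a1", "a2", "a3", "a4", "a5"}
quorum_size = len(acceptors) // 2 + 1  # simple majority: 3 of 5

# Every pair of majority quorums shares at least one acceptor, so two
# quorums can never independently choose conflicting values.
for q1 in combinations(sorted(acceptors), quorum_size):
    for q2 in combinations(sorted(acceptors), quorum_size):
        assert set(q1) & set(q2), "two majority quorums must overlap"
```

This overlap is what lets a new leader learn about any previously chosen value by contacting just one majority.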
Paxos algorithm to reach consensus
We need to guarantee that at most one value can be chosen, otherwise there is no consensus. The algorithm has two phases.
Phase 1:
Proposal preparation: the leader chooses a proposal number pn = k and sends a prepare message to a majority of acceptors requesting: that a proposal with pn < k should not be accepted; and the highest-numbered proposal with pn < k already accepted by each acceptor.
Proposal promise: an acceptor must remember the highest proposal number it has ever accepted as well as the highest proposal number it has ever responded to. The acceptor can accept a proposal with pn = k if and only if it has not responded to a prepare request with pn > k; if it has already replied to a prepare request for a proposal with pn > k, then it should not reply. Lost messages are treated as an acceptor choosing not to respond.
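The Phase 1 promise rule can be sketched as acceptor-side logic. This is an illustrative skeleton, not the full protocol; the class name and message shapes are assumptions:

```python
class Acceptor:
    """Minimal Paxos acceptor state for Phase 1 (names are illustrative)."""
    def __init__(self):
        self.promised_pn = -1     # highest pn this acceptor has responded to
        self.accepted_pn = -1     # highest pn it has accepted
        self.accepted_value = None

    def on_prepare(self, pn):
        # Promise only if we have not already responded to a higher
        # (or equal) proposal number.
        if pn > self.promised_pn:
            self.promised_pn = pn
            # Reply with the highest-numbered proposal accepted so far,
            # so the leader can pick its value in Phase 2.
            return ("promise", self.accepted_pn, self.accepted_value)
        return None  # stay silent: indistinguishable from a lost message

a = Acceptor()
assert a.on_prepare(5) == ("promise", -1, None)
assert a.on_prepare(3) is None  # already promised pn = 5
```

Note that ignoring a stale prepare is safe precisely because the protocol already tolerates lost messages.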
Paxos algorithm to reach consensus – Phase 2
Accept request: if the majority of acceptors respond, the leader chooses the value v of the proposal as follows: the value v of the highest-numbered proposal selected from all the responses, or an arbitrary value if no proposal was issued by any of the proposers. The proposer then sends an accept request message containing (pn = k, v) to a quorum of acceptors.
Accept: if an acceptor receives an accept message for a proposal with proposal number pn = k, it must accept it if and only if it has not already promised to consider proposals with pn > k. If it accepts the proposal, it registers the value v and sends an accept message to the proposer and to every learner. If it does not accept the proposal, it ignores the request.
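The leader's value-selection rule at the start of Phase 2 can be sketched as a small function. Promises are assumed to be (pn, value) pairs as in the Phase 1 replies, with pn = -1 meaning "nothing accepted yet"; these encodings are assumptions for illustration:

```python
def choose_value(promises, default):
    """Phase 2 value selection (sketch): take the value of the
    highest-numbered proposal reported in the promises, or an
    arbitrary (client-proposed) value if none was accepted yet."""
    accepted = [(pn, v) for (pn, v) in promises if pn >= 0]
    return max(accepted)[1] if accepted else default

# Two acceptors report no prior accepts; one reports (pn=2, "x"),
# so the leader is forced to re-propose "x":
assert choose_value([(-1, None), (2, "x"), (-1, None)], "y") == "x"
# With no prior accepts the leader is free to use the client's value:
assert choose_value([(-1, None), (-1, None)], "y") == "y"
```

Forcing the leader to adopt the highest-numbered accepted value is what preserves safety: once a value might have been chosen, no later proposal can replace it.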
Petri Nets vs. traditional graphs
In a traditional graph, vertices model states and arcs model the actions that cause the transition from one state to another. Such a graph cannot express the evolution of a system in time; it contains no information about the current state.
In 1962 Carl Adam Petri introduced a family of graphs, the Petri Nets (PNs), to model dynamic systems. PNs are bipartite graphs populated with tokens that flow through the graph. A bipartite graph is a graph with two classes of nodes; arcs always connect a node in one class with a node in the other class.
The two classes of PN nodes are places and transitions; Petri Nets are therefore also called P/T (Place/Transition) nets. Arcs connect one place with one or more transitions, or a transition with one or more places.
PNs modeling the dynamic behavior of systems
To model the dynamic behavior of systems, the places of a Petri Net contain tokens; firing a transition removes tokens from its input places and adds them to its output places. The distribution of tokens in the places of a Petri Net at a given time is called the marking of the net and reflects the state of the system being modeled.
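The firing rule above can be sketched in a few lines. This is a minimal model assuming unit arc weights; the net (two places, two transitions shuttling a token) is hypothetical:

```python
# A minimal Place/Transition net sketch: a marking is a dict place -> tokens;
# each transition names its input and output places (arc weights of 1 assumed).
transitions = {
    "t1": {"in": ["p1"], "out": ["p2"]},
    "t2": {"in": ["p2"], "out": ["p1"]},
}

def enabled(marking, t):
    # A transition is enabled when every input place holds a token.
    return all(marking.get(p, 0) >= 1 for p in transitions[t]["in"])

def fire(marking, t):
    # Firing removes a token from each input place and adds one to each output.
    assert enabled(marking, t)
    m = dict(marking)
    for p in transitions[t]["in"]:
        m[p] -= 1
    for p in transitions[t]["out"]:
        m[p] = m.get(p, 0) + 1
    return m

m0 = {"p1": 1, "p2": 0}   # initial marking
m1 = fire(m0, "t1")
assert m1 == {"p1": 0, "p2": 1}   # the token moved from p1 to p2
```

Each marking reached this way is one state of the modeled system, which is exactly the information a traditional state graph cannot carry.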
Petri Nets can model different activities in a distributed system.
A transition may model: the occurrence of an event, the execution of a computational task, the transmission of a packet, or a logic statement.
The input places of a transition could model: the pre-conditions of an event, the input data for the computational task, the presence of data in an input buffer, or the pre-conditions of a logic statement.
The output places of a transition could model: the post-conditions associated with an event, the results of the computational task, the presence of data in an output buffer, or the conclusions of a logic statement.
Petri nets can model concurrent activities.
Conflict: only one of the transitions t1 and t2 may fire, but not both. Two transitions are concurrent if they are causally independent; concurrent transitions may fire before, after, or in parallel with each other.
Confusion: when choice and concurrency are mixed
Symmetric confusion: two transitions are concurrent and, at the same time, are in conflict with another transition; e.g., t1 and t3 are concurrent and, at the same time, in conflict with t2 (see (b)).
Asymmetric confusion: a transition t1 is concurrent with another transition t3 and will be in conflict with t2 if t3 fires before t1 (see (c)).
More definitions
Preset of a transition: the set of input places of that transition.
Postset of a transition: the set of output places of that transition.
Preset of a place: the set of input transitions of that place.
Postset of a place: the set of output transitions of that place.
Enabled transition: a transition whose input places hold the required number of tokens for it to fire.
Labeled PNs: arcs can be labeled with weights; a weight determines how many tokens are necessary to enable the transition. An arc is also called a flow relation.
Pure net: a PN in which the weights of all arcs are 1.
Initial marking: the initial state modeled by a PN, characterized by the disposition of tokens. If a net has n places, the marking is a vector with n components (p1, p2, …, pn).
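The weighted enabling rule for labeled PNs can be sketched directly: a transition fires only when each input place holds at least as many tokens as the arc weight demands. The net below (one transition with a weighted preset) is hypothetical:

```python
# Preset of a hypothetical transition "t" with weighted arcs:
# t needs 2 tokens from p1 and 1 token from p2.
preset = {"t": {"p1": 2, "p2": 1}}

def enabled(marking, t):
    # Enabled iff every input place meets its arc weight.
    return all(marking.get(p, 0) >= w for p, w in preset[t].items())

assert enabled({"p1": 2, "p2": 1}, "t")
assert not enabled({"p1": 1, "p2": 1}, "t")  # p1 short one token
```

With all weights equal to 1 this reduces to the pure-net rule from the earlier definitions.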
Synchronization: in (b), t4 can fire only if the conditions associated with p3 and p4 are satisfied. The markings of the four nets are: (a) (1,0,0,0); (b) (1,1,0,0); (c) (1,1,0,0); (d) (n,n,0,0).
Petri Nets with inhibitor arcs can model priorities (c)
The process modeled by t2 has a higher priority than the one modeled by t1. If both processes are ready to run, places p1 and p2 hold tokens. When the two processes are ready, transition t2 fires first, modeling the activation of the second process. Only after t2 has fired will transition t1, modeling the activation of the first process, fire.
Petri Nets can model exclusion (d)
n concurrent processes in a shared-memory environment; all processes can read at the same time, but only one may write. Place p3 models the process allowed to write, p4 the ones allowed to read, p2 the ones ready to access the shared memory, and p1 the running tasks. Transition t2 models the initialization/selection of the process allowed to write and t1 of the processes allowed to read; t3 models the completion of a write and t4 the completion of a read. p3 may hold at most one token, while p4 may hold at most n. If all n processes are ready to access the shared memory, all n tokens in p2 are consumed when transition t1 fires. However, place p4 may contain n tokens obtained by successive firings of transition t2.
State machines (a): in a state machine, all transitions have exactly one incoming and one outgoing arc. This topological constraint limits the expressiveness of a state machine; no concurrency is possible.
Marked graphs (b): each place has only one incoming and one outgoing arc; thus, marked graphs do not allow modeling of choice.
Extended free-choice and asymmetric choice nets
Extended free-choice net: if two transitions share an input place, they must share all places in their presets. In an asymmetric choice net, two transitions may share only a subset of their input places.
PN subclasses
State machines do not model concurrency and synchronization.
Marked graphs do not model choice and conflict.
Free-choice nets do not model confusion.
Asymmetric-choice nets allow asymmetric confusion but not symmetric confusion.
Firing sequence: a nonempty sequence of transitions
Two reads, followed by a write, followed by a read, starting from the initial state in (d): t1 t4 t1 t4 t2 t3 t1 t4.
Reachability and liveness
Reachability: the problem of finding whether a marking sn is reachable from the initial marking s0. Reachability is a fundamental concern for dynamic systems; the reachability problem is decidable, but reachability algorithms require exponential time and space.
Liveness: a marked Petri Net (N, s0) is live if it is possible to fire any transition starting from the initial marking s0. The absence of deadlock in a system is guaranteed by the liveness of its net model.
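For small nets the reachability set can be explored by breadth-first search over markings. A minimal sketch, using a hypothetical two-place net whose full reachability set has just two markings (real reachability analysis can need exponential time and space, as noted above):

```python
from collections import deque

# Each transition: (preset weights, postset weights).
transitions = {"t1": ({"p1": 1}, {"p2": 1}), "t2": ({"p2": 1}, {"p1": 1})}

def successors(marking):
    # Yield every marking reachable by firing one enabled transition.
    for t, (pre, post) in transitions.items():
        if all(marking.get(p, 0) >= w for p, w in pre.items()):
            m = dict(marking)
            for p, w in pre.items():
                m[p] -= w
            for p, w in post.items():
                m[p] = m.get(p, 0) + w
            yield tuple(sorted(m.items()))  # hashable form for the seen-set

s0 = (("p1", 1), ("p2", 0))  # initial marking
seen, frontier = {s0}, deque([s0])
while frontier:
    for nxt in successors(dict(frontier.popleft())):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

assert len(seen) == 2  # {p1=1, p2=0} and {p1=0, p2=1}
```

A marking sn is reachable from s0 exactly when this search visits it; liveness checks, by contrast, must examine which transitions stay fireable along every path.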
Incidence matrix: given a Petri Net with n transitions and m places, the incidence matrix F = [f_ij] is an n × m integer matrix with f_ij = w(i,j) − w(j,i), where w(i,j) is the weight of the flow relation (arc) from transition t_i to its output place p_j, and w(j,i) is the weight of the arc from the input place p_j to transition t_i.
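With F defined this way, firing transition t_i changes the marking by exactly the i-th row of F. A minimal sketch for a hypothetical two-transition, two-place net (t1 moves a token from p1 to p2, t2 moves it back):

```python
# F[i][j] = w(t_i -> p_j) - w(p_j -> t_i): net token change in place p_j
# when transition t_i fires.
F = [
    [-1, +1],   # t1: consumes one token from p1, produces one in p2
    [+1, -1],   # t2: consumes one token from p2, produces one in p1
]

def fire(m, i):
    # New marking = old marking + row i of the incidence matrix.
    return [m_j + F[i][j] for j, m_j in enumerate(m)]

m0 = [1, 0]                       # initial marking (p1, p2)
assert fire(m0, 0) == [0, 1]      # firing t1
assert fire(fire(m0, 0), 1) == [1, 0]  # t1 then t2 restores m0
```

The matrix form is what makes algebraic analyses (state equations, invariants) possible, since a whole firing sequence becomes a sum of rows of F.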
Colored and stochastic Petri Nets
Colored Petri Nets (CPNs) allow tokens of different colors, thus increasing the expressivity of PNs, but they do not simplify their analysis.
Several extensions of Petri Nets support performance analysis by associating a random time with each transition. In the case of Stochastic Petri Nets (SPNs), a random time elapses between the moment a transition is enabled and the moment it fires; this random time allows the model to capture the service time associated with the activity modeled by the transition.
Application of stochastic Petri nets to performance analysis of complex systems is generally limited by the explosion of the state space of the models. Stochastic High-Level Petri Nets (SHLPNs) were introduced in 1988; SHLPNs allow easy identification of classes of equivalent markings even when the corresponding aggregation of states in the Markov domain is not obvious. This aggregation can reduce the size of the state space by one or more orders of magnitude, depending on the system being modeled.