COMP60621 Designing for Parallelism

Presentation on theme: "COMP60621 Designing for Parallelism"— Presentation transcript:

1 COMP60621 Designing for Parallelism
Lecture 3: Introduction to Modelling Concurrency
John Gurd, Graham Riley
Centre for Novel Computing, School of Computer Science, University of Manchester
November 2012

2 Introduction
Sequential and concurrent processes
Sequences of actions and state transition
Interleaving model of computation
State diagrams
Atomicity
Consistent memory operations
Safety, liveness and fairness
Review of modelling assumptions
Summary

3 Overview
We will seek to describe what we mean by concurrent behaviour, so that we can understand what a parallel program will compute (and why).
We will see that real programs are so complex that we need a method for building models of aspects of their behaviour, both to design concurrent programs and to understand the behaviour of existing ones.
We will build models in a modelling language. A good modelling language will be smaller and simpler than a real programming language.
We will look at one modelling language, Promela, and an associated tool, Spin, an industrial-strength model checker.
The aim is a pragmatic appreciation of what such tools can do for parallel program developers.

4 Model: processes and atomic statements
Processes are units of execution; think of a process as a 'normal' procedure in a typical programming language.
A process executes a series of statements in the order in which they appear in its code: it is a sequential process. Each statement completes before the next statement executes, i.e. statements are atomic. (We will have to be careful in our choice of what is atomic.)
A process has a control pointer (or program counter) which points to the next instruction to be executed.
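As an illustrative sketch (not on the original slide), a sequential process in the Promela modelling language might look like the following; the process name P and its statements are invented. Promela's atomic keyword groups several statements into a single transition, which is how a model controls what counts as atomic.

  /* illustrative sketch: one sequential process with atomic statements */
  int n = 0;

  active proctype P() {
      n = 1;                /* statement 1: one atomic transition         */
      n = n + 1;            /* statement 2: runs only after statement 1   */
      atomic {              /* atomic groups two statements into one step */
          n = n + 1;
          printf("n = %d\n", n)
      }
  }

The control pointer of P indicates which of these statements executes next.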

5 A concurrent program
A concurrent program consists of a finite set of sequential processes. Our model of execution proceeds by executing a sequence of the atomic statements; the sequence is obtained by arbitrarily interleaving the atomic statements from the (sequential) processes.
A computation is an execution sequence that can occur as a result of the interleaving.
We will justify this interleaving model as we go along. First, we take a closer look at how the model works.

6 Computations…interleaving
Consider two processes: p, composed of statement p1 followed by p2, and q, composed of statement q1 followed by q2. (Assume no statements transfer control, e.g. think of statements like assignments; no 'if' statements.)
All possible computations can be constructed:
p1->q1->p2->q2
p1->q1->q2->p2
p1->p2->q1->q2
q1->p1->q2->p2
q1->p1->p2->q2
q1->q2->p1->p2
Note that each computation respects the sequential orderings p1 then p2 and q1 then q2 within the sequential processes. It seems any arbitrary interleaving is a valid computation; we will refine this view later.
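As an illustrative Promela sketch (not on the original slide), the two processes could be modelled as below; the print statements merely stand in for the abstract statements p1, p2, q1 and q2. Repeated random simulations with Spin (for example, spin -p file.pml) show different interleavings of the four statements from run to run.

  /* illustrative sketch: two sequential processes whose statements interleave */
  active proctype p() {
      printf("p1\n");       /* stands for statement p1 */
      printf("p2\n")        /* stands for statement p2 */
  }

  active proctype q() {
      printf("q1\n");       /* stands for statement q1 */
      printf("q2\n")        /* stands for statement q2 */
  }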

7 State and state transition diagrams
Processes (programs) consist of more than just statements (and a control pointer). Program variables get modified by the statements: the value of a variable affected by a statement can be different after the statement executes, for example an assignment such as k = 0; k ← k + 1.
Variables have a range of values they can take but, typically, not all values a variable might take will occur in a computation (compare booleans with integers or floats/reals).
The state of a computation is represented by the values of each variable and the control pointers of the processes involved. The execution of a program is defined by states and (atomic) transitions between states as statements are executed.

8 A trivial sequential program
This is a language-independent description of an algorithm (from Ben-Ari). A program consists of a title, declarations of global variables (if any) and a description of the process(es) involved. Here there is one process.
A process may have declarations of local variables, followed by the statements of the process, which are labelled.

9 States contain the control pointer (pointing to the next statement)
and the values of local and global variables. Arrows represent transitions between states as statements are executed. In this case, there is an initial state and an end state (from which there is no further transition).

10 A trivial concurrent program
Here there are two processes, each declaring a local variable, which is initialised. Each process assigns the value of its local variable to the shared global variable. What is the result? (We assume a scheduler is selecting the 'next' statement to execute.)
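The program itself is not reproduced in this transcript; as a hedged reconstruction in Promela (the names n, k1 and k2 and the initial values are assumptions), it might look like this:

  /* hedged reconstruction of the trivial concurrent program */
  int n = 0;                /* shared global variable */

  active proctype p() {
      int k1 = 1;           /* local variable, initialised */
      n = k1                /* copy the local value into the shared global */
  }

  active proctype q() {
      int k2 = 2;           /* local variable, initialised */
      n = k2
  }

Depending on which assignment the scheduler happens to run last, n ends up holding 1 or 2.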

11 Interleaving and memory consistency
What happens if two (or more) processes attempt to write different values to the same (global) variable 'at the same time'? How do we model what happens in real memory hardware?
With a global variable n, process p executes n = 1 while process q executes n = 2. Do we get 1, 2, 3 or some other value? Do we get different values every time the writes occur?

12 Sequentially consistent memory
Real hardware systems provide sequential consistency: the global variable will take one of the values being written to it, and the actual value will be determined as if the writes had taken place in some arbitrary order. This leads us to a model of arbitrary interleaving!
In the example, we can model the outcome as: either p1 will occur before q1, or q1 will occur before p1. The state diagram will allow for both possibilities. Which one occurs on a particular execution of the program depends on the (external) instruction scheduler.
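A hedged Promela sketch of this check (the done counter and the monitor process are our additions, not part of the slide): once both writers have finished, the shared variable must hold one of the two written values, never anything else.

  int n = 0;
  byte done = 0;            /* counts how many writers have finished */

  active proctype p() { n = 1; done++ }
  active proctype q() { n = 2; done++ }

  /* monitor: blocks until both writers are done, then checks the outcome */
  active proctype check() {
      (done == 2);                  /* executable only once p and q have finished */
      assert(n == 1 || n == 2)      /* sequential consistency: no 'mixed' value   */
  }

Verifying such a model with Spin explores every interleaving and confirms that the assertion holds on all of them.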

13 Interleaving model of concurrency
This is a further justification for our interleaving model of concurrency. 'Actions', i.e. transitions (the executions of atomic statements), never occur simultaneously.
In fact, we ignore time when modelling concurrent behaviour and focus on the order of statements. We consider performance separately, through performance modelling, for example.
Our focus is on (correct) behaviour, i.e. does the program do the right thing (what we are designing it to do), and avoid doing wrong things?
We want the model to apply to all real concurrent systems. Think of an 'all-seeing' oracle noting the order of events.

14 Interleaving…
Only one transition is taken at a time, producing an ordering. This sounds sequential(!), but in a concurrent program there will generally be a choice of transition from a given state: several statements from different processes may be ready to execute. Parallelism is evident in the orders of statements permitted.
In a sequential program there is only ever one 'next' statement possible (only one sequential process exists). In a concurrent program, the choice of the 'next' statement may be non-deterministic, e.g. a random choice (by the scheduler) from the set of 'ready' statements.
We can see all this in the state transition diagram!
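Promela can also express this kind of choice directly inside a model. The sketch below (our illustration, not from the slide) uses an if statement whose options are all executable; Spin selects one arbitrarily, just as the scheduler selects among the ready statements of different processes.

  byte choice = 0;

  active proctype chooser() {
      if                        /* all three options are executable,     */
      :: choice = 1             /* so the selection is non-deterministic */
      :: choice = 2
      :: choice = 3
      fi;
      printf("chose %d\n", choice)
  }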

15 Note: there are two possible end states!
Left branch: p1 before q1. Right branch: q1 before p1.

16 State diagram and computations
From the initial state, two transitions are possible. In any particular computation, i.e. (re-)run of the program, these will occur in some order; only one path will be taken, and we will not know which until execution time.
The sequence of state transitions that actually occurs is called a computation of the program. This is a path through the (reachable) states in the state diagram, also known as a trail.
A concurrent program will have many possible computations. If we want a program to have a certain behaviour (i.e. be correct), we must consider all possible computations that could occur and make sure the behaviour is correct no matter what path is taken.

17 How big is the state space?
Remember that a state consists of the values of the set of control pointers of the processes and the values of any variables involved. The maximum possible size of the state space depends on how many statements each process has, i.e. on the range of values the control pointers may take, and on the range of values each variable can take.
Be careful to count all the possible control pointer values: a single statement has two control pointer values associated with it, one before the statement is executed and one after. Compare a sequence of statements and a loop…
The maximum size of the state space is the product of the sizes of the ranges of the control pointers (numbers of statements) and the variables.

18 Example
Consider three processes (p1, p2, p3), each with 3 statements (i.e. 4 control pointer values), and two shared variables (v1, v2), each taking 3 values.
Max size = #p1 x #p2 x #p3 x #v1 x #v2 = 4 x 4 x 4 x 3 x 3 = 576 states!
But programs often have many thousands of statements and many variables taking a wide range of values: a boolean variable has 2 values (true and false); a 32-bit integer has 2^32 possible values!
Clearly, state spaces can get large very quickly as problems get bigger. This is known as the 'state explosion problem'.
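For concreteness, a hypothetical Promela model matching this configuration (the statements themselves are invented) could be written as follows; verifying it with Spin reports the number of states actually stored, which is typically well below the 576-state upper bound because many combinations are unreachable.

  /* hypothetical model: three processes, three statements each, and two
     shared variables that only ever hold the values 0, 1 or 2 */
  byte v1 = 0;
  byte v2 = 0;

  active [3] proctype worker() {
      v1 = 1;       /* statement 1 */
      v2 = 2;       /* statement 2 */
      v1 = 0        /* statement 3 */
  }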

19 We will see that we do not expect all the possible states to appear in a computation.
In fact, we rely on this property. Some states will be 'bad' states, and we hope to show that these states cannot be reached in any computation.
For example, where mutual exclusion is required, we do not expect the control pointers of more than one process to be at certain statements at any given point in the execution. In the 'Counter' interference example (in the previous lecture), there should be no state in which both control pointers are placed after their processes' 'read' statements, so in a correct program we do not expect to see any states (in the state diagram) with this property.
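A hedged Promela sketch of that Counter example (the original program is not reproduced here, so the variable names and the explicit read/write split are assumptions): Spin finds the interleaving in which both processes read before either writes, so one update is lost and the assertion fails.

  int counter = 0;
  byte done = 0;

  /* each incrementer reads the counter, then writes back the value plus one */
  active [2] proctype inc() {
      int tmp;
      tmp = counter;            /* 'read' statement  */
      counter = tmp + 1;        /* 'write' statement */
      done++
  }

  /* once both incrementers have finished, check that no update was lost */
  active proctype check() {
      (done == 2);
      assert(counter == 2)      /* fails: counter can end up as 1 */
  }

The bad states are exactly those in which both control pointers sit between the read and the write; when the assertion fails, Spin records a trail that replays the offending interleaving.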

20 Brief Summary State space quickly gets too large to handle manually
We need to work with models of real programs, and we need computerised tools to explore them! Model checkers like Spin are one such tool.
We do not need to construct the entire state space at once; it can be constructed as it is explored, as we will see.

21 Correctness: safety, liveness, fairness
In sequential programs, running a program with the same input will always give the same results: there is only one path through the state space. If the result is not what you expect, i.e. the program is not correct, it can be debugged using breakpoints to examine the state and determine where things go wrong, and 'bugs' can be fixed.
Simple debugging is dangerous with concurrent programs! Concurrent programs can have many computations, which may well give different results as a result of interleaving. This leads to timing bugs, race conditions, etc., which may only show up years later when, say, the hardware or the OS changes.
A concurrent program has to be correct regardless of the path taken. As designers, we have to be confident this is so!

22 Correctness Concurrent programs may have ordinary bugs too
in the sequential code of each process. These have to be fixed first! But usually we can ignore the details of the sequential behaviour of processes and focus (in our models) on the interaction and interference between them.
We find that for concurrent programs we are interested in correctness properties of the computations rather than in sequential bugs. There are two main types of properties…

23 Safety properties Properties which must always be true. For example:
A bank account may be required always to have a positive balance (no matter what transactions are attempted by different people from different ATMs, etc.). A traffic light system will always allow traffic to flow in only one direction at a time.
Properties that are always true are called safety properties. For a safety property to hold, the property must be true in every state of every computation; conversely, there should be no state in which the property is false!
Note that safety properties can usually be satisfied by programs that do nothing! So we need more than safety: we need some notion of the computation being alive and making progress…
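In Spin, such 'always' properties can be written as LTL formulas (or as assertions). The following sketch of the bank-account example is our illustration; the deposit/withdraw model is invented, and [] is Spin's LTL operator for 'always'.

  int balance = 100;

  /* an invented account process that repeatedly deposits and withdraws,
     but never withdraws more than would leave a positive balance */
  active proctype account() {
      do
      :: (balance + 10 <= 200) -> balance = balance + 10   /* bounded deposits    */
      :: (balance - 10 > 0)    -> balance = balance - 10   /* guarded withdrawals */
      od
  }

  /* safety property: the balance is positive in every reachable state */
  ltl no_overdraft { [] (balance > 0) }

Spin checks the formula against every state of every computation and reports any state in which it is false.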

24 Liveness properties
There are other properties of a system which capture the idea that something must eventually happen. For example:
A traffic light system must eventually let all cars through. (We don't say how long they may have to wait! The focus here is on behaviour rather than performance.)
At an ATM, if you enter your PIN, eventually your balance will be displayed.
In a GUI, if you click the close button, eventually the window will close.
These properties are known as liveness properties. For a liveness property to hold, it must be the case that in every computation there is some state in which the property is true, i.e. a state in which the balance is displayed or the window is closed.
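In Spin's LTL syntax, 'eventually' is written <>. A hedged sketch of the ATM example (the two boolean flags and the tiny user/ATM processes are our invention):

  bool pin_entered = false;
  bool balance_displayed = false;

  active proctype user() {
      pin_entered = true            /* the user enters the PIN */
  }

  active proctype atm() {
      pin_entered;                  /* blocks until the PIN has been entered */
      balance_displayed = true      /* then the balance is displayed */
  }

  /* liveness property: whenever the PIN is entered, the balance is
     eventually displayed */
  ltl responds { [] (pin_entered -> <> balance_displayed) }

For the property to hold, every computation must contain some state in which balance_displayed is true once pin_entered has become true.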

25 Fairness (and starvation)
The idea of fairness can be seen in the following examples:
A traffic light system which can switch the allowed flow of traffic between two directions should not always choose to allow traffic to flow in one direction, ignoring the other.
A system controlling access to a shared bank account should not always choose to ignore one account holder who is trying to access the account.
In a parallel program, when statements from a number of processes are ready to execute, the scheduler should not continually select statements from some processes while ignoring one or more others.
A process that is continually ignored is said to be starved.

26 Weak fairness
The notion of fairness leads to a refinement of the idea that any arbitrary interleaving is a valid computation of a parallel program. It does not really make sense to assume that statements from any specific process are never selected in a valid interleaving. This is particularly so if the program is repeatedly executing some overall function, as opposed to simply calculating a result.
So, a computation is weakly fair if, at any state in the computation, a statement that is continually ready to execute eventually appears in the computation.
Think of two processes looping, one attempting to assign true to a shared boolean variable and the other setting it false. In a weakly fair computation, the boolean will be assigned each value at some point(s) in the computation. A weakly fair traffic light system will allow traffic to flow in both directions, at different times!
We will always assume weak fairness.
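A hedged Promela sketch of the two looping processes (our illustration): without fairness, a computation that only ever selects one of them is a legal interleaving and the liveness property below fails; Spin's verifier has an option to enforce weak fairness when checking such properties (roughly, running the generated verifier with its -a -f options).

  bool flag = false;

  active proctype setter() {
      do
      :: flag = true        /* always ready to set the flag   */
      od
  }

  active proctype clearer() {
      do
      :: flag = false       /* always ready to clear the flag */
      od
  }

  /* under weak fairness, the flag is set and cleared again and again */
  ltl both_values { ([] <> flag) && ([] <> !flag) }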

27 Summary of modelling approach
Model as concurrent sequential processes.
State consists of local and global variables and the control pointers of each process.
State space size is a multiplicative function of the sizes of variables and numbers of statements, so it quickly becomes huge.
Assume arbitrary interleaving of statements (sequential consistency of memory).
Ignore time and focus on behaviour (interaction).
Concurrency results in a choice of transition from a state. A computation is the path taken through the state space.
A concurrent program will have many possible computations and may, therefore, behave differently from run to run (though it may or may not produce different results).
To check correctness of a parallel program we can define properties that must always be true (in every state) and properties that must eventually be true (in some state of any path).
We need tools to help us explore the state space to check these properties.

