
# Concurrent & Distributed Systems, Lecture 2: Introduction to interacting processes


## Recap

- Concurrent processes could be pseudo-parallel:
  - sharing a single CPU, context switching between them;
  - with the potential to be truly parallel if more than one CPU is available.
- Or they could be truly parallel.
  - The distinction is not significant in 99% of cases, except for the greater absolute speed and power available with more than one CPU.
- Concurrent processes need to be interacting in some way to be interesting as a concurrent system.

This lecture considers two examples:

- A programming example. Remember that one reason for using concurrent programs is to run parallel algorithms that exploit the greater power of parallel CPUs. This example shows that a parallel algorithm can be more efficient even if it runs on just one CPU, pseudo-parallel.
- A distributed commercial application, typical of very many real-world systems.

In both examples the processes interact, giving us our first look at potential problems in handling concurrent systems.

## A simple parallel algorithm for sorting: 1

Sorting is used extensively, often on a very large scale, so efficient algorithms, and ones which can run on parallel machines, are a must. Most sorting algorithms compare pairs of elements and swap them if they are in the wrong order. E.g. to sort the following list of 10 numbers into highest-first order, many pairs would get compared and swapped if necessary:

100 6 2 19 3 4 45 -89 23 14

e.g. [2 19] might get swapped to give a more ordered list such as:

100 6 19 2 3 4 45 -89 23 14

There are many algorithms, and one measure of their efficiency is the number of compare/swap operations needed to sort a list of length n. One example often taught to students is the Bubble sort algorithm, where the number of compare/swaps is:

cs = n(n-1)/2 ≈ n²/2

So for n = 10, cs = 45.
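The cs = n(n-1)/2 count can be checked directly. Below is a minimal sketch of a textbook bubble sort instrumented to count compare/swap steps (the function name and counter are illustrative, not from the lecture):

```python
def bubble_sort_desc(xs):
    """Sort xs into highest-first order, counting compare/swap operations."""
    xs = list(xs)
    cs = 0  # number of compare (and possibly swap) operations
    n = len(xs)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            cs += 1  # one compare/swap step
            if xs[j] < xs[j + 1]:  # wrong order for highest-first
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, cs

data = [100, 6, 2, 19, 3, 4, 45, -89, 23, 14]
sorted_list, cs = bubble_sort_desc(data)
# for n = 10, cs comes out as 45 = n(n-1)/2
```

The inner loop shrinks by one each pass, so the counts sum to 9 + 8 + … + 1 = 45 for n = 10.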

## A simple parallel algorithm for sorting: 2

Now consider sorting the list in two halves, of 5 elements each: [100 6 2 19 3] and [4 45 -89 23 14].

cs = 5(5-1)/2 + 5(5-1)/2 = 20, leaving two sorted half-lists as below:

[100 19 6 3 2] and [45 23 14 4 -89]

These need to be collated into a single list, which involves a further 10 compare/select operations (just a bit faster than compare/swap, in fact), giving a total number of operations of 30. In general:

- nops ≈ n²/2 (standard Bubble sort)
- nops ≈ (n/2)²/2 + (n/2)²/2 + n = n²/4 + n (half + half + collate)
- nops ≈ n²/2p + n (dividing the sort into p sub-sorts)

All this can be done on a single processor, context switching. It's just a clever algorithm!
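The half + half + collate count can be sketched in code. This assumes, as the slide does, one compare/select per output element in the collation step (all names are illustrative):

```python
def bubble_sort_desc(xs):
    """Highest-first bubble sort, counting compare/swap operations."""
    xs, cs = list(xs), 0
    for i in range(len(xs) - 1):
        for j in range(len(xs) - 1 - i):
            cs += 1
            if xs[j] < xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, cs

def collate_desc(a, b):
    """Merge two highest-first lists, one compare/select per output element."""
    out, ops, i, j = [], 0, 0, 0
    for _ in range(len(a) + len(b)):
        ops += 1  # one compare/select step, as counted on the slide
        if j >= len(b) or (i < len(a) and a[i] >= b[j]):
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out, ops

data = [100, 6, 2, 19, 3, 4, 45, -89, 23, 14]
left, cs1 = bubble_sort_desc(data[:5])    # 10 compare/swaps
right, cs2 = bubble_sort_desc(data[5:])   # 10 compare/swaps
merged, m = collate_desc(left, right)     # 10 compare/selects
total = cs1 + cs2 + m                     # 30 in total, vs 45 for one big sort
```

This matches the slide's arithmetic: 10 + 10 + 10 = 30 operations instead of 45.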

## A simple parallel algorithm for sorting: 3

If more than one processor is available, the sub-sorts could be done in parallel, in which case nops ≈ n²/2p² (in parallel) + n. (We can ignore the n term when n is large.) With parallel collation, synchronisation will now be a problem!
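A minimal sketch of running the sub-sorts concurrently, using Python's `concurrent.futures` (my choice of tooling, not the lecture's). Note that CPython threads are in fact pseudo-parallel on one CPU because of the GIL, which neatly illustrates the lecture's point that the algorithm works either way; the synchronisation is handled implicitly because `pool.map` only yields once each sub-sort has finished:

```python
from concurrent.futures import ThreadPoolExecutor
from heapq import merge

def sub_sort_desc(chunk):
    """Stand-in for any sub-sort over one chunk."""
    return sorted(chunk, reverse=True)

def parallel_sort_desc(data, p=2):
    # split into p roughly equal chunks
    size = (len(data) + p - 1) // p
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=p) as pool:
        sorted_chunks = list(pool.map(sub_sort_desc, chunks))
    # collation: the main thread synchronises here, after all sub-sorts complete
    return list(merge(*sorted_chunks, reverse=True))

data = [100, 6, 2, 19, 3, 4, 45, -89, 23, 14]
result = parallel_sort_desc(data, p=2)
```

If the collation itself were parallelised, explicit synchronisation between the merging workers would be needed, which is the problem the slide flags.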

## The Travel Agent example: 1

Imagine a (simple) country-wide travel company, FastJet:

- It has offices in many towns.
- Each office has a terminal to the main HQ computer where all travel information is held. (Even if this company works with a distributed PC network, the key booking data will still be in only one place, so this example will still be valid.)

Consider just 2 offices (London & Bristol) and just one flight (FJ23) to Majorca. At either office, the following rough scenario might take place:

1. Customer LC enters, looks at some brochures, decides maybe on Majorca.
2. LC asks one of the staff (LS) for details.
3. LS logs on to the system.
4. LS shows the customer some hotel details, weather etc. on screen.
5. LC decides to book onto FJ23; n tickets issued so far for this flight, so OK.
6. LS allocates FJ23 seat n+1 to LC.
7. LC eventually finds her passport in her handbag, after a search.
8. LS enters LC's personal details into the system.
9. LS issues a named ticket for seat n+1 to LC.
10. LC pays.
11. LC uses LS's system to check some more information about Majorca, then leaves.

All this might repeat later for another customer, also for FJ23.
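The booking steps that touch the shared data (5, 6, 8, 9) can be sketched as a small model. This is an illustrative sketch, not the lecture's own code: n is the shared ticket count and tickets maps seat numbers to passenger names.

```python
tickets = {}  # seat number -> passenger name (shared HQ data)
n = 50        # tickets issued so far for FJ23 (shared HQ data)

def book_fj23(passenger):
    """One agent's booking steps: allocate seat n+1, then issue the named ticket."""
    global n
    seat = n + 1           # step 6: allocate the next free seat
    n = seat               # update the shared count
    tickets[seat] = passenger  # steps 8-9: issue the named ticket
    return seat

seat = book_fj23("LC")
```

Run sequentially, one customer at a time, this behaves perfectly; the trouble starts when two offices run it at once, as the next slides show.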

## The Travel Agent example: 2

Now imagine one afternoon in London & Bristol:

- What goes on in London and Bristol is asynchronous!
- So we don't know the precise order of events as seen by the main CPU.
- And there can be many different orderings of what happens.

## The Travel Agent example: 3

In particular, the steps that access the shared booking data (the red bits on the original slide) matter. They are called Critical Sections of the processes. Because the steps in each process are asynchronous (humans hesitate, etc.), the steps where n and tickets are accessed could occur in more than one actual order; these are called the different possible scenarios.

Imagine that at the start of the afternoon n = 50, with α denoting London's critical-section steps and β Bristol's, and then this happens:

- α1: then n = 51, ticket[51] still blank
- β1: then n = 52, ticket[52] still blank
- α2: then n = 52, ticket[52] issued to LC
- β2: then n = 52, ticket[52] issued to BC

i.e. seat 51 is not allocated, and seat 52 has two tickets!

- Now think about other possible scenarios: some are OK, some are not.
- Now think how many scenarios are possible.
- Now think whether it matters if there is one computer or distributed PCs.

The Critical Sections share common resources: n and tickets[].
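The bad scenario above can be replayed deterministically in code. This sketch (names are illustrative) splits each agent's critical section into its two data-accessing steps and interleaves them by hand in the order α1, β1, α2, β2:

```python
n = 50
tickets = {}  # seat number -> passenger name (shared HQ data)

def bump_count():
    """An agent's first critical step: increment the shared ticket count."""
    global n
    n = n + 1

def issue_ticket(passenger):
    """An agent's second critical step: issue a ticket for seat n,
    re-reading the shared count (which may have changed in between)."""
    tickets[n] = passenger

bump_count()          # α1: London increments, n is now 51
bump_count()          # β1: Bristol increments, n is now 52
issue_ticket("LC")    # α2: London issues ticket[52] to LC
issue_ticket("BC")    # β2: Bristol issues ticket[52] to BC, clobbering LC's

# outcome: seat 51 was never allocated, seat 52 has been sold twice
```

Other interleavings (e.g. α1, α2, β1, β2) are harmless, which is exactly why such bugs survive testing: the outcome depends on the scenario that happens to occur.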

## Introduction to interacting processes: summary

The previous two examples show processes interacting, and that problems can easily arise if this is not controlled.

- In the sorting example, if collating is a concurrent process then it has to be synchronised with the sub-sorts, otherwise the collating will be wrong.
- In the Travel Agent example, if processes concurrently share a common resource, they can interfere with each other in its proper logical use and so cause actual errors (mis-allocated tickets). In this case the parts of the processes concerned, the Critical Sections, will need to be made mutually exclusive.

Both these examples give rise to Safety problems, i.e. the outcome of the processes is wrong. When multiple concurrent processes are asynchronous (which they usually are), there can be many detailed scenarios for how the logical steps in each process progress in relation to each other, and this may or may not matter. If multi-threaded concurrent processes can produce multiple scenarios, then testing will be a lot harder than for single-threaded processes.
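As a sketch of the mutual-exclusion remedy, here the whole critical section runs under one lock, so its steps can no longer interleave with another booking's. The use of Python's `threading.Lock` is my illustrative choice; the lecture names the requirement (mutual exclusion) but not a mechanism:

```python
import threading

n = 50
tickets = {}  # seat number -> passenger name (shared HQ data)
booking_lock = threading.Lock()
results = {}  # passenger name -> allocated seat

def book_fj23(passenger):
    """Critical section: atomic with respect to all other bookings."""
    global n
    with booking_lock:          # mutual exclusion around the whole section
        n = n + 1               # allocate the next seat
        tickets[n] = passenger  # issue the named ticket for that seat
        return n

def agent(name):
    results[name] = book_fj23(name)

threads = [threading.Thread(target=agent, args=(name,)) for name in ("LC", "BC")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# whichever order the agents run in, seats 51 and 52 each get exactly one
# ticket and no seat is skipped -- only the assignment of customer to seat
# varies between scenarios
```

The remaining nondeterminism (which customer gets seat 51) is harmless; the safety property, one ticket per seat with no gaps, now holds in every scenario.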
