CSP: Communicating Sequential Processes
Prabhaker Mateti

CSP Overview
- Models distributed computing
- Idealistic primitives: communication by send/receive only; no shared variables (across outer processes)
- Idealistic send/receive: no buffering; the sending of a value in one process and the receiving of that value in another process appear externally as one event

CSP Synchronous Message Passing
Send (!)
- Example: R ! (x*y + c) sends the value of the expression x*y + c to process R
- The sender names the receiving process R
- The sender waits until R is ready to receive
Receive (?)
- Example: S ? v
- The receiver names the sender process S
- v names the receptacle: a variable declared with a compatible type
- The receiver waits until S is ready to send
Deadlock is possible.
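
A minimal Go sketch of this rendezvous (all names below are ours, not from the slides). An unbuffered Go channel gives the same synchronous behaviour as CSP's ! and ?, with one difference worth noting: the two parties name a shared channel rather than naming each other.

package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered: send and receive appear as one event

	// Plays the sender's role, R!(x*y + c).
	go func() {
		x, y, c := 3, 4, 5
		ch <- x*y + c // waits here until the receiver is ready
	}()

	// Plays the receiver's role, S?v.
	v := <-ch      // waits here until the sender is ready
	fmt.Println(v) // prints 17
}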

G1 → S1 [] G2 → S2 [] ... [] Gn → Sn
- A guard Gi must not contain sends
- A guard Gi can include, at its tail, a receive
Example: val > 0; Y?P() → Q
- Suppose val = 0: the input Y?P() is not even attempted.
- Suppose val > 0, but Y is not ready to send: then we are not committed to perform this input.
- Suppose val > 0, and Y is ready to send: then we perform this input and execute Q.
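
One way to mimic such a boolean-plus-receive guard in Go, as a hedged sketch (channel names are ours): a nil channel can never be selected, so disabling a select case when the boolean part of the guard is false reproduces "the input is not attempted".

package main

import "fmt"

func main() {
	yP := make(chan struct{})    // carries Y's P() signal
	other := make(chan struct{}) // stands in for the other alternatives G2 → S2, ...

	go func() { yP <- struct{}{} }() // Y becomes ready to send

	val := 1
	guarded := yP
	if val <= 0 {
		guarded = nil // guard false: this alternative can never fire
	}

	select {
	case <-guarded: // chosen only when val > 0 and Y is ready
		fmt.Println("executing Q")
	case <-other: // some other enabled alternative
		fmt.Println("executing S2")
	}
}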

Additional CSP Notation
- :=   assignment
- C1; C2   sequential composition: C2 after C1
- C1 || C2   simultaneous (parallel) execution; not C1; C2 nor C2; C1
- C2 || C1   same as C1 || C2
- if G1 → S1 [] G2 → S2 [] ... [] Gn → Sn fi   if statement; Hoare wrote [ ... ]; [] is the "fat bar"
- do G1 → S1 [] G2 → S2 [] ... [] Gn → Sn od   loop; Hoare wrote *[ ... ]
- Multiple Gi can be true at once: non-determinism
- skip   the do-nothing (no-op) statement
- ;   also used to end declarations

Small Set of Integers
Design a server that provides the abstraction of a small set of integers.
- Make it as distributed as possible
- Make it as symmetric as possible
Simplifying assumptions:
- Finite capacity, say 100 integers; any and all integers can be elements
- A query "have you got x?" is replied to with a Boolean
- A request "insert x" gets no reply
- Inserting a duplicate is no problem
- Running out of room is a problem ... but we silently discard the element

Small Set of Integers: Design
- An array S of 102 processes: the receptionist S(0), the holders S(i: 1..100), and the sink S(101)
- Each S(i), i in 1..100, holds one integer or is empty-handed
- If S(i) holds an integer, then S(i-1) also holds one
- The integer of S(i-1) is less than the integer of S(i), so the held elements are sorted
- S(101) is a sink
- S(0) is the "receptionist"

Small Set of Integers
S(i: 1..100) ::
do n: integer; S(i-1)?has(n) → S(0)!false
[] n: integer; S(i-1)?insert(n) →
     do m: integer; S(i-1)?has(m) →
          if m <= n → S(0)!(m = n)
          [] m > n → S(i+1)!has(m)
          fi
     [] m: integer; S(i-1)?insert(m) →
          if m < n → S(i+1)!insert(n); n := m
          [] m = n → skip
          [] m > n → S(i+1)!insert(m)
          fi
     od
od

Small Set of Integers: S(i)
do n: integer; S(i-1)?has(n) → S(0)!false
[] n: integer; S(i-1)?insert(n) →
     do m: integer; S(i-1)?has(m) →
          if m <= n → S(0)!(m = n)
          [] m > n → S(i+1)!has(m)
          fi
     [] m: integer; S(i-1)?insert(m) →
          if m < n → S(i+1)!insert(n); n := m
          [] m = n → skip
          [] m > n → S(i+1)!insert(m)
          fi
     od
od
Notes:
- Each S(i) starts out empty-handed.
- While empty-handed, any has(whatever) is replied to with false.
- The first inserted value is saved in n.
- After the first insert, the process spends the rest of its life in the inner loop.
- Note: we have no breaks or exits.

Small Set of Integers: has(m)
do n: integer; S(i-1)?has(n) → S(0)!false
[] n: integer; S(i-1)?insert(n) →
     do m: integer; S(i-1)?has(m) →
          if m <= n → S(0)!(m = n)
          [] m > n → S(i+1)!has(m)
          fi
     [] m: integer; S(i-1)?insert(m) →
          if m < n → S(i+1)!insert(n); n := m
          [] m = n → skip
          [] m > n → S(i+1)!insert(m)
          fi
     od
od
Notes:
- The processes are arranged in a "row": index 0 at the left, 101 at the right.
- Receiving S(i-1)?has(m) in the inner loop implies S(i) is not empty-handed and that S(i-1) does not hold m.
- n is the number S(i) is holding; elements are sorted left to right.
- m = n: we have it. m < n: no one else has m either. Either way, the reply goes to the receptionist S(0).
- m > n: pass the query to S(i+1).

Small Set of Integers: insert(m)
do n: integer; S(i-1)?has(n) → S(0)!false
[] n: integer; S(i-1)?insert(n) →
     do m: integer; S(i-1)?has(m) →
          if m <= n → S(0)!(m = n)
          [] m > n → S(i+1)!has(m)
          fi
     [] m: integer; S(i-1)?insert(m) →
          if m < n → S(i+1)!insert(n); n := m
          [] m = n → skip
          [] m > n → S(i+1)!insert(m)
          fi
     od
od
Notes:
- No S(j) with j < i holds m when insert(m) arrives here.
- Case m = n: a request to insert a duplicate element; do nothing.
- Case m > n: the value m to be inserted is higher; ask S(i+1) to insert m.
- Case m < n: the number n held by S(i) is higher; ask neighbor S(i+1) to hold n, and S(i) now holds m.

Small Set of Integers: S(0) and S(101)
S(101) is a sink:
S(101) ::
do S(100)?has(m) → S(0)!false
[] S(100)?insert(m) → skip
od
S(0) is the "receptionist":
S(0) ::
do Client?has(n) → S(1)!has(n);
     if (i: 1..100) S(i)?b → Client!b fi
[] Client?insert(n) → S(1)!insert(n)
od
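
The whole design translates quite directly to Go; the sketch below is our own rendering (type and function names are ours), with each S(i) as a goroutine, each arrow an unbuffered channel, and every cell replying to the receptionist on one shared reply channel. As on the slides, each cell runs a first loop while empty-handed, then spends the rest of its life in a second loop holding one value.

package main

import "fmt"

// A message travelling down the row: a query has(n) or a request insert(n).
type op struct {
	insert bool
	n      int
}

// cell mirrors S(i): in from S(i-1), out to S(i+1), reply to the receptionist.
func cell(in <-chan op, out chan<- op, reply chan<- bool) {
	var n int
	for m := range in { // first loop: empty-handed
		if !m.insert {
			reply <- false // has(x): this suffix of the row holds nothing
			continue
		}
		n = m.n // the first inserted value is saved in n
		break
	}
	for m := range in { // second loop: holding n forever after
		switch {
		case !m.insert:
			if m.n <= n {
				reply <- (m.n == n) // sortedness: the answer is known here
			} else {
				out <- m // larger values live to our right
			}
		case m.n < n:
			out <- op{insert: true, n: n} // push our larger value right,
			n = m.n                       // keep the smaller one
		case m.n > n:
			out <- m
		default: // m.n == n: duplicate insert, skip
		}
	}
}

// sink mirrors S(101): has is answered false, inserts are silently discarded.
func sink(in <-chan op, reply chan<- bool) {
	for m := range in {
		if !m.insert {
			reply <- false
		}
	}
}

func main() {
	const size = 100
	reply := make(chan bool)
	first := make(chan op)
	in := first
	for i := 0; i < size; i++ {
		out := make(chan op)
		go cell(in, out, reply)
		in = out
	}
	go sink(in, reply)

	// The receptionist S(0), inlined: forward, then await the one reply.
	has := func(n int) bool { first <- op{n: n}; return <-reply }
	insert := func(n int) { first <- op{insert: true, n: n} }

	insert(5)
	insert(3)
	insert(5)                           // duplicate: silently ignored
	fmt.Println(has(3), has(5), has(4)) // true true false
}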

Small Set of Integers: Questions
- Does "distributed" mean "concurrent" and/or "parallel"?
- How do we delete an element?
- Can we redesign this into a lossless set of integers?

Matrix Multiplication

Matrix Multiplication
[ M(i: 1..3, 0) :: WEST
|| M(0, j: 1..3) :: NORTH
|| M(i: 1..3, 4) :: EAST
|| M(4, j: 1..3) :: SOUTH
|| M(i: 1..3, j: 1..3) :: CENTER ]
Declarations omitted for clarity.
NORTH  = do true → M(1, j)!0 od
EAST   = do M(i, 3)?x → skip od
CENTER = do M(i, j-1)?x → M(i, j+1)!x; M(i-1, j)?sum; M(i+1, j)!(A(i, j)*x + sum) od
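
A Go sketch of the same systolic array (names are ours; the slide gives only the CSP). Each border or center process is a goroutine, each arrow is an unbuffered channel: x values flow east along the rows, partial sums flow south down the columns, and SOUTH collects the column sums. For a single input vector each process body runs once, where the slide's do-loops handle streams.

package main

import "fmt"

const n = 3

func main() {
	// A(i, j) = 1..9, one entry held per CENTER process.
	var a [n + 1][n + 1]float64
	k := 1.0
	for i := 1; i <= n; i++ {
		for j := 1; j <= n; j++ {
			a[i][j] = k
			k++
		}
	}
	x := [n + 1]float64{0, 1, 1, 1} // x[i] enters on row i from the west

	// west[i][j]: value arriving at cell (i, j) from its west neighbour
	// north[i][j]: partial sum arriving at cell (i, j) from the north
	var west, north [n + 2][n + 2]chan float64
	for i := 0; i <= n+1; i++ {
		for j := 0; j <= n+1; j++ {
			west[i][j] = make(chan float64)
			north[i][j] = make(chan float64)
		}
	}

	for i := 1; i <= n; i++ {
		go func(i int) { west[i][1] <- x[i] }(i) // WEST feeds the rows
		go func(j int) { north[1][j] <- 0 }(i)   // NORTH sends the zero sums
		go func(i int) { <-west[i][n+1] }(i)     // EAST is a sink
	}
	for i := 1; i <= n; i++ {
		for j := 1; j <= n; j++ {
			go func(i, j int) { // CENTER, one pass of the slide's do-loop
				xv := <-west[i][j]
				west[i][j+1] <- xv // pass x on to the east
				sum := <-north[i][j]
				north[i+1][j] <- a[i][j]*xv + sum // accumulate southwards
			}(i, j)
		}
	}

	for j := 1; j <= n; j++ { // SOUTH collects sum over i of A(i,j)*x(i)
		fmt.Printf("column %d: %v\n", j, <-north[n+1][j]) // 12, 15, 18
	}
}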

Sieve of Eratosthenes
Print all primes less than 10000: 2, 3, 5, 7, 11, 13, ...
Design:
- Use an array SIEVE of processes
- SIEVE(i) filters out multiples of the i-th prime
- sqrt(10000) = 100 filter processes are needed
- SIEVE(0) generates the candidates; SIEVE(101) prints the survivors

Sieve of Eratosthenes [SIEVE(i: 1..100):: SIEVE( i - 1)?p; print ! p; mp := p; do SIEVE(i - 1)? m  do m > mp --> mp := mp + p od; if m = mp --> skip [] m < mp --> SIEVE(i + l)!m fi od || SIEVE(0):: print!2; n := 3; do n < 10000 --> SIEVE(1)!n; n := n + 2 od || SIEVE(101):: do SIEVE(100)?n --> print ! n od || print:: do (i: 0..101) SIEVE(i)?n --> .“print-the-number” n od ]

Shared Variables
- Duality: variables vs. processes
- Provide a "shared variable" V as a process V
- User processes do:
  V ! exp   the equivalent of V := exp
  V ? u     the equivalent of u := V, where u is local to this process
- Semaphores are, by intention, shared variables.
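
A sketch of this duality in Go (channel names are ours): the shared variable V becomes a goroutine that owns the value, and V!exp / V?u become communications with it.

package main

import "fmt"

func main() {
	set := make(chan int) // carries V!exp from clients
	get := make(chan int) // carries V's value to clients doing V?u

	go func() { // the process V owns the value; nobody else touches it
		v := 0
		for {
			select {
			case v = <-set: // a client did V!exp
			case get <- v: // a client did V?u
			}
		}
	}()

	set <- 42          // equivalent of V := 42
	fmt.Println(<-get) // equivalent of u := V; prints 42
}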

Semaphores in CSP
S :: val: integer; val := 0; /* or any positive int */
do (i: 1..100) X(i)?V() → val := val + 1
[] (i: 1..100) val > 0; X(i)?P() → val := val - 1
od
Notes:
- Within the loop: 100 + 100 (unnamed) processes
- These 200 processes share the integer val, yet there is no mutex problem here. Why?
- An array X of 100 client processes can do S!P() or S!V()
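
The semaphore server maps to a Go goroutine almost line for line; this sketch (names ours) expresses the guard val > 0 with the nil-channel idiom used earlier.

package main

import (
	"fmt"
	"sync"
)

// The semaphore process S: V() is always accepted, P() only when val > 0.
func semaphore(p, v chan struct{}) {
	val := 0
	for {
		pReq := p
		if val == 0 {
			pReq = nil // the guard "val > 0" is false: don't accept P()
		}
		select {
		case <-v: // some X(i) did S!V()
			val++
		case <-pReq: // some X(i) did S!P()
			val--
		}
	}
}

func main() {
	p := make(chan struct{})
	v := make(chan struct{})
	go semaphore(p, v)
	v <- struct{}{} // initialise val to 1, so S acts as a mutex

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) { // one of the client processes X(i)
			defer wg.Done()
			p <- struct{}{} // S!P()
			fmt.Println("client", id, "in critical section")
			v <- struct{}{} // S!V()
		}(i)
	}
	wg.Wait()
}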

Buffered Message Passing
- CSP send/receive has no buffering: this is synchronous message passing.
- A Buffer process can be inserted between the (original) sender and the (original) receiver.
- This gives semi-asynchronous message passing: sends do not block (until the Buffer becomes full).
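
A sketch of such a Buffer process in Go (names ours). Go could provide this directly with make(chan int, capacity), but the explicit process shows the CSP construction: the buffer stops accepting when full and has nothing to offer when empty, again via nil-channel guards.

package main

import "fmt"

// A Buffer process: synchronous on each side, but it decouples the sender
// from the receiver until its queue fills up.
func buffer(in <-chan int, out chan<- int, capacity int) {
	var q []int
	for {
		inC, outC := in, out
		if len(q) == capacity {
			inC = nil // full: stop accepting, so the sender now blocks
		}
		var head int
		if len(q) == 0 {
			outC = nil // empty: nothing to offer the receiver
		} else {
			head = q[0]
		}
		select {
		case x := <-inC:
			q = append(q, x)
		case outC <- head:
			q = q[1:]
		}
	}
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go buffer(in, out, 4)

	for i := 1; i <= 4; i++ {
		in <- i // completes without waiting for the receiver
	}
	fmt.Println(<-out, <-out, <-out, <-out) // 1 2 3 4
}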

Modeling Remote Procedure Call
RPC client:
server!proc5(e1, e2);
server?proc5(r1, r2, r3)
RPC server:
do ...
[] client?proc4(...) → ...
[] client?proc5(v1, v2) → ... client!proc5(e3, e4, e5)
[] ...
od
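
A sketch of this RPC pattern in Go (types and names are ours, and the results are made up): the client's "call" is a send of the arguments immediately followed by a receive of the results, while the server answers requests in its service loop.

package main

import "fmt"

// Argument and result bundles for the hypothetical remote procedure proc5.
type proc5Req struct{ e1, e2 int }
type proc5Rep struct{ r1, r2, r3 int }

// The server's loop: "client?proc5(v1, v2) → ... client!proc5(e3, e4, e5)".
func server(req <-chan proc5Req, rep chan<- proc5Rep) {
	for r := range req {
		rep <- proc5Rep{r.e1 + r.e2, r.e1 * r.e2, 0} // made-up results
	}
}

func main() {
	req := make(chan proc5Req)
	rep := make(chan proc5Rep)
	go server(req, rep)

	req <- proc5Req{2, 3}               // server!proc5(e1, e2)
	ans := <-rep                        // server?proc5(r1, r2, r3)
	fmt.Println(ans.r1, ans.r2, ans.r3) // 5 6 0
}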

Fairness
[ X :: Y!stop()
|| Y :: c := true;
   do c; X?stop() → c := false
   [] c → n := n + 1
   od ]
- Will/must this terminate?
- We (programmers) should not assume fairness in the implementation.
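
A Go rendering of the example (names ours). Go's select takes a ready communication in preference to the default case, so here the loop terminates once X is ready to send; CSP makes no such fairness promise, which is exactly the point of the slide.

package main

import "fmt"

func main() {
	stop := make(chan struct{})
	go func() { stop <- struct{}{} }() // X :: Y!stop()

	n := 0
	for c := true; c; { // Y's do-loop
		select {
		case <-stop: // c; X?stop() → c := false
			c = false
		default: // c → n := n + 1
			n++
		}
	}
	fmt.Println("terminated after", n, "iterations")
}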

Process Algebras
Mathematical theories of concurrency.
- Events: on, off, valve.open, valve.close, mouse?(x,y), screen!bitmap
- Primitive processes: STOP (communicates nothing), SKIP (represents successful termination)
- Event prefix: e → P
- Deterministic choice
- Nondeterministic choice
- Interleaving
- Interface parallel
- Hiding
www.usingcsp.com/ has an entire book by Hoare.

CSP Implementations
- Occam is based on CSP; the machine language of the Transputer CPU (1980s-90s)
- Andrews' book, Section 10.x
- CSP in Java: JCSP; software release at www.cs.kent.ac.uk/projects/ofa/jcsp/
- The Communicating Process Architectures conference (CPA)
- CSP in Jython and Python, CPA 2009
- Limbo and Newsqueak are programming languages from Bell Labs, included in the Inferno OS, 2004
- Concurrent Event-driven Programming in occam-π for the Arduino, CPA 2011
- Experiments in Multi-core and Distributed Parallel Processing using JCSP, CPA 2011
- LUNA: Hard Real-Time, Multi-Threaded, CSP-Capable Execution Framework, CPA 2011

CSP References
- C. A. R. Hoare, "Communicating Sequential Processes," Communications of the ACM, Vol. 21, No. 8, 1978, pp. 666-677.
- www.usingcsp.com/ has an entire book by Hoare. For now, do not read it (!!)
- Andrews, chapter on Synchronous Message Passing.
- University of Kent, CSP for Java (JCSP), www.cs.kent.ac.uk/projects/ofa/jcsp/