
1 David Evans
http://www.cs.virginia.edu/~evans
Lecture 19: ||ism

"I don't think we have found the right programming concepts for parallel computers yet. When we do, they will almost certainly be very different from anything we know today."
Per Brinch Hansen, "Concurrent Pascal" (last sentence), HOPL II, 1993

"My only serious debate with your account is with the very last sentence. I do not believe there is any 'right' collection of programming concepts for parallel (or even sequential) computers. The design of a language is always a compromise, in which the designer must take into account the desired level of abstraction, the target machine architecture, and the proposed range of applications."
C. A. R. Hoare, comment at HOPL II, 1993

Speaker notes:
Background: just got here last week; finished my degree at MIT the week before.
Philosophy of advising: students don't come to grad school to implement someone else's idea; you can get paid more to do that in industry. Grad school is where you learn to be a researcher, and an important part of that is deciding which problems and ideas are worth spending time on, so grad students should have their own projects.
I am looking for students who can come up with their own ideas for research, and will take good students interested in the things I am interested in: systems, programming languages and compilers, and security.
The rest of the talk gives a flavor of the kinds of things I am interested in; it is meant to give you ideas (hopefully even inspiration!), but not to suggest what you should work on.

CS655: Programming Languages, University of Virginia Computer Science, David Evans

2 Menu
Readings policy
Challenge problem (Lecture 17)
Techniques for concurrent programming:
  Definitions
  Understanding concurrency primitives

3 Remaining Readings
Always read the abstract; read the rest if it seems interesting to you.
Use the time you save not having required readings to:
  Work on your course project
  Work on your research
We've covered enough to:
  Decide if you are interested in research related to programming languages
  Give you a solid enough background to not embarrass yourself

4 Challenge Problem
Prove or disprove:
  frag1 ≡  i := 1;
           while i < n do
             x := x * i;
             i := i + 1;
           end;
           i := 0;
is observationally equivalent to
  frag2 ≡  i := n;
           while i > 0 do
             x := x * i;
             i := i - 1;
           end;
           i := 0;
when n >= 0.
Possible approaches:
  Use fixed point machinery to get the meaning of both as state-transformation (Σ → Σ) functions and show they are equivalent.
  Use induction to get the meaning of both for a given n and show they are the same for all n.
  Use induction: show frag1(n = 0) defines the same function as frag2(n = 0), and that if frag1(n) is equivalent to frag2(n), then frag1(n+1) is equivalent to frag2(n+1).
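Before committing to one of these proof strategies, it can help to test the claim mechanically. The following is a small sketch, not from the slides; the Python functions frag1 and frag2 are illustrative renderings of the two fragments. It runs both on small inputs and reports any disagreement it finds; whether the fragments really are observationally equivalent is exactly what the challenge asks you to settle.

    # Hypothetical executable renderings of the two fragments; each returns
    # the final values of (x, i), the observable state of the fragment.
    def frag1(x, n):
        i = 1
        while i < n:
            x = x * i
            i = i + 1
        i = 0
        return x, i

    def frag2(x, n):
        i = n
        while i > 0:
            x = x * i
            i = i - 1
        i = 0
        return x, i

    # Search small inputs with n >= 0 for a counterexample before trying a proof.
    mismatches = [(x, n) for n in range(6) for x in range(1, 4)
                  if frag1(x, n) != frag2(x, n)]
    print(mismatches if mismatches else "no difference found on the tested inputs")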

5 Sequential Programming
So far, most languages we have seen provide a sequential programming model:
  The language definition specifies a sequential order of execution.
  The language implementation may attempt to parallelize programs, but they must behave as though they were sequential.
Exceptions: Algol 68, Ada, and Java include support for concurrency.

6 Definitions
Concurrency – any model of computation supporting partially ordered time (a semantic notion).
Parallelism – hardware that can execute multiple threads simultaneously.
A concurrent program may be executed without parallelism; hardware may provide parallelism without concurrency.

7 Parallelism without Concurrency
Smart compilers can figure out how to implement a sequential program in parallel.
Every parallel computation can be executed sequentially.

8 Concurrent Programming Languages
Expose parallelism to the programmer:
  Some problems are clearer to program using explicit parallelism.
Modularity:
  Don't have to explicitly interleave code for different abstractions.
  High-level interactions – synchronization, communication.
Modelling:
  Closer map to real-world problems.
Provide the performance benefits of parallelism when the compiler could not find it automatically.

9 Fork & Join
Concurrency primitives:
fork E → ThreadHandle
  Creates a new thread that evaluates Expression E; returns a unique handle identifying that thread.
join T
  Waits for the thread identified by ThreadHandle T to complete.
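As a point of comparison (not part of the slides), here is a minimal sketch of how these two primitives map onto a real library, Python's threading module: Thread(...).start() plays the role of fork and Thread.join() the role of join. The worker function and the results dictionary are illustrative.

    import threading

    results = {}

    def worker(name, expr):
        # The forked thread evaluates its expression and records the value.
        results[name] = expr()

    # fork E -> ThreadHandle: create and start a thread evaluating E, keep its handle.
    t1 = threading.Thread(target=worker, args=("t1", lambda: 6 * 7))
    t2 = threading.Thread(target=worker, args=("t2", lambda: sum(range(10))))
    t1.start()
    t2.start()

    # join T: wait for the thread identified by handle T to complete.
    t1.join()
    t2.join()
    print(results)   # both results are guaranteed to be present after the joins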

10 Bjarfk (BARK with Fork & Join)
Program ::= Instruction*
  A program is a sequence of instructions. Instructions are numbered from 0. Execution begins at instruction 0 and completes when the initial thread halts.
Instruction ::= Loc := Expression
  Loc gets the value of Expression.
| Loc := FORK Expression
  Loc gets the value of the ThreadHandle returned by FORK; starts a new thread at the instruction numbered Expression.
| JOIN Expression
  Waits until the thread associated with ThreadHandle Expression completes.
| HALT
  Stops thread execution.
Expression ::= Literal | Expression + Expression | Expression * Expression

11 Bjarfk Program
[0]  R0 := 1
[1]  R1 := FORK 10
[2]  R2 := FORK 20
[3]  JOIN R1
[4]  R0 := R0 * 3
[5]  JOIN R2
[6]  HALT        % result in R0
[10] R0 := R0 + 1
[11] HALT
[20] R0 := R0 * 2
[21] HALT

Atomic instructions:
  a1: R0 := R0 + 1
  a2: R0 := R0 * 2
  x3: R0 := R0 * 3
Partial ordering: a1 <= x3
So the possible results are:
  (a1, a2, x3) = 12
  (a2, a1, x3) = 9
  (a1, x3, a2) = 12
What if assignment instructions are not atomic?
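To check the claimed set of results, one can enumerate every interleaving of the three atomic assignments that respects the partial order a1 <= x3 imposed by the JOIN at instruction [3]. This is a Python sketch, not part of the slides; the helper names a1, a2, x3 mirror the labels above.

    from itertools import permutations

    def a1(r0): return r0 + 1   # [10] R0 := R0 + 1
    def a2(r0): return r0 * 2   # [20] R0 := R0 * 2
    def x3(r0): return r0 * 3   # [4]  R0 := R0 * 3

    results = set()
    for order in permutations((a1, a2, x3)):
        if order.index(a1) < order.index(x3):   # JOIN R1 forces a1 before x3
            r0 = 1                              # [0] R0 := 1
            for step in order:
                r0 = step(r0)
            results.add(r0)
    print(sorted(results))   # expected: [9, 12]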

12 What formal tool should we use to understand FORK and JOIN?

13 Operational Semantics Game
Real World:        Program  →  Answer
Abstract Machine:  Initial Configuration  →  Intermediate Configuration  →  ...  →  Final Configuration
The Input Function maps a Program to the Initial Configuration, the Transition Rules step from one intermediate configuration to the next, and the Output Function maps the Final Configuration back to an Answer.

14 Structured Operational Semantics
An SOS for a language is a five-tuple:
  C    Set of configurations for an abstract machine
  ⇒    Transition relation (a subset of C x C)
  I    Program → C (input function)
  F    Set of final configurations
  O    F → Answer (output function)

15 Sequential Configurations
A configuration is defined by:
  Array of instructions
  Program counter (PC)
  Values in registers (any integer)
C = Instructions x PC x RegisterFile
(The slide pictures the Instructions array and the RegisterFile side by side, with the PC pointing at the current instruction.)

16 Concurrent Configurations
A configuration is defined by:
  Array of instructions
  Array of threads, where Thread = <ThreadHandle, PC>
  Values in registers (any integer)
C = Instructions x Threads x RegisterFile
(The slide pictures the Instructions array and the RegisterFile side by side, with each thread's PC pointing at its current instruction.)
Architecture question: is this a SIMD, MIMD, SISD, or MISD model?

17 Input Function: I: Program → C
C = Instructions x Threads x RegisterFile where, for a Program with n instructions numbered 0 to n - 1:
  Instructions[m] = Program[m] for m >= 0 && m < n
  Instructions[m] = ERROR otherwise
  RegisterFile[k] = 0 for all integers k
  Threads = [ <0, 0> ]
The top thread (identified by ThreadHandle 0) starts at PC = 0.
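A configuration and the input function are easy to encode directly. The sketch below is not from the slides; the representation choices and the name load are mine: a dict for Instructions, a set of (handle, PC) pairs for Threads, and a defaultdict for the RegisterFile.

    from collections import defaultdict

    def load(program):
        """Input function I: Program -> C.  Instructions come from the program
        (anything outside it behaves as ERROR), every register starts at 0,
        and the single initial thread is <handle 0, PC 0>."""
        instructions = dict(enumerate(program))   # PC -> instruction
        threads = {(0, 0)}                        # set of (ThreadHandle, PC)
        registers = defaultdict(int)              # register -> value, default 0
        return instructions, threads, registers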

18 Final Configurations
F = Instructions x Threads x RegisterFile where <0, PC> ∈ Threads and Instructions[PC] = HALT
A different possibility: where for all <t, PCt> ∈ Threads, Instructions[PCt] = HALT

19 Assignment
Note: we also need a rule to deal with Loc := Expression; we can rewrite until we have a literal on the RHS.
<t, PCt> ∈ Threads & Instructions[PCt] = Loc := Value
  < Instructions x Threads x RegisterFile > ⇒ < Instructions x Threads' x RegisterFile' >
  where Threads' = Threads – {<t, PCt>} + {<t, PCt + 1>}
        RegisterFile'[n] = RegisterFile[n]    if n ≠ Loc
        RegisterFile'[n] = value of Value     if n = Loc

20 Fork
<t, PCt> ∈ Threads & Instructions[PCt] = Loc := FORK Literal
  < Instructions x Threads x RegisterFile > ⇒ < Instructions x Threads' x RegisterFile' >
  where Threads' = Threads – {<t, PCt>} + {<t, PCt + 1>} + {<nt, Literal>}
          where <nt, x> ∉ Threads for all possible x (nt is a fresh ThreadHandle)
        RegisterFile'[n] = RegisterFile[n]              if n ≠ Loc
        RegisterFile'[n] = value of ThreadHandle nt     if n = Loc

21 Join
<t, PCt> ∈ Threads & Instructions[PCt] = JOIN Value
  & <v, PCv> ∈ Threads & Instructions[PCv] = HALT & v = value of Value
  < Instructions x Threads x RegisterFile > ⇒ < Instructions x Threads' x RegisterFile >
  where Threads' = Threads – {<t, PCt>} + {<t, PCt + 1>}
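The three transition rules can be read off as an interpreter step function. The sketch below is mine, not the slides' (it keeps the same representation as the load sketch above: instructions as a dict of tuples, threads as a set of (handle, PC) pairs, registers as a defaultdict); expressions are encoded as functions of the register file, so "value of Value" is just a call.

    from collections import defaultdict

    def step(instructions, threads, registers):
        """Fire one enabled rule for one thread, if any; return True if a
        transition was taken.  Which enabled rule fires is arbitrary, just
        as the semantics leaves it unspecified."""
        for (t, pc) in list(threads):
            op, *rest = instructions.get(pc, ("ERROR",))
            if op == "ASSIGN":                        # Loc := Value
                loc, value = rest
                registers[loc] = value(registers)
                threads.remove((t, pc)); threads.add((t, pc + 1))
                return True
            if op == "FORK":                          # Loc := FORK Literal
                loc, start = rest
                nt = max(h for h, _ in threads) + 1   # a fresh ThreadHandle
                registers[loc] = nt
                threads.remove((t, pc)); threads.add((t, pc + 1)); threads.add((nt, start))
                return True
            if op == "JOIN":                          # JOIN Value
                v = rest[0](registers)
                # enabled only when the joined thread sits at a HALT instruction
                if any(h == v and instructions.get(p, ("ERROR",))[0] == "HALT"
                       for h, p in threads):
                    threads.remove((t, pc)); threads.add((t, pc + 1))
                    return True
        return False

    # The Bjarfk example program, written as instruction tuples:
    prog = {
        0: ("ASSIGN", 0, lambda r: 1),
        1: ("FORK", 1, 10),
        2: ("FORK", 2, 20),
        3: ("JOIN", lambda r: r[1]),
        4: ("ASSIGN", 0, lambda r: r[0] * 3),
        5: ("JOIN", lambda r: r[2]),
        6: ("HALT",),
        10: ("ASSIGN", 0, lambda r: r[0] + 1), 11: ("HALT",),
        20: ("ASSIGN", 0, lambda r: r[0] * 2), 21: ("HALT",),
    }
    threads, registers = {(0, 0)}, defaultdict(int)
    while not any(t == 0 and prog.get(pc, ("ERROR",))[0] == "HALT" for t, pc in threads):
        step(prog, threads, registers)
    print(registers[0])   # one of the legal answers, 9 or 12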

22 What else is needed?
Can we build all the useful concurrency primitives we need using FORK and JOIN?
Can we implement a semaphore?
  No; we need an atomic test-and-acquire operation.
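A small demonstration of why (not from the slides; the names and the deliberate sleep are mine): a "semaphore" built only from ordinary reads and writes of a shared counter lets two threads pass a single permit, because the test and the decrement are separate steps. The sleep merely widens the window between them so the race shows up reliably.

    import threading, time

    count = 1     # one "permit"
    inside = 0    # how many threads made it past the wait

    def broken_wait():
        global count, inside
        while count <= 0:      # test ...
            pass
        time.sleep(0.01)       # window in which another thread can also pass the test
        count -= 1             # ... then acquire: not atomic with the test
        inside += 1

    workers = [threading.Thread(target=broken_wait) for _ in range(2)]
    for w in workers: w.start()
    for w in workers: w.join()
    print("threads past the single permit:", inside)   # typically 2, not 1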

23 Locking Statements
Program ::= LockDeclaration* Instruction*
LockDeclaration ::= PROTECT LockHandle Loc
  Prohibits reading or writing location Loc in a thread that does not hold the lock LockHandle.
Instruction ::= ACQUIRE LockHandle
  Acquires the lock identified by LockHandle. If another thread has acquired the lock, the thread stalls until the lock is available.
Instruction ::= RELEASE LockHandle
  Releases the lock identified by LockHandle.
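These instructions correspond closely to a mutual-exclusion lock in a real library. A minimal sketch using Python's threading.Lock (not from the slides; the variable r0 stands in for a protected location):

    import threading

    r0 = 0
    lock = threading.Lock()        # the LockHandle; here it protects r0

    def increment():
        global r0
        for _ in range(100_000):
            lock.acquire()         # ACQUIRE: stalls while another thread holds the lock
            r0 = r0 + 1            # only the lock holder touches the protected location
            lock.release()         # RELEASE: makes the lock available again

    workers = [threading.Thread(target=increment) for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join()
    print(r0)                      # 400000 with the lock; without it, updates can be lost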

24 Locking Semantics
C = Instructions x Threads x RegisterFile x Locks
  where Locks = { < LockHandle, ThreadHandle | free, Loc > }
I: Program → C is the same as before, with
  Locks = { <LockHandle, free, Loc> | PROTECT LockHandle Loc ∈ LockDeclarations }

25 Acquire
<t, PCt> ∈ Threads & Instructions[PCt] = ACQUIRE LockHandle & { <LockHandle, free, S> } ⊆ Locks
  < Instructions x Threads x RegisterFile x Locks > ⇒ < Instructions x Threads' x RegisterFile x Locks' >
  where Threads' = Threads – {<t, PCt>} + {<t, PCt + 1>}
        Locks' = Locks – {<LockHandle, free, S>} + {<LockHandle, t, S>}

26 Release
<t, PCt> ∈ Threads & Instructions[PCt] = RELEASE LockHandle & { <LockHandle, t, S> } ⊆ Locks
  < Instructions x Threads x RegisterFile x Locks > ⇒ < Instructions x Threads' x RegisterFile x Locks' >
  where Threads' = Threads – {<t, PCt>} + {<t, PCt + 1>}
        Locks' = Locks – {<LockHandle, t, S>} + {<LockHandle, free, S>}

27 New Assignment Rule
<t, PCt> ∈ Threads & Instructions[PCt] = Loc := Value
  & ( { <LockHandle, t, Loc> } ⊆ Locks  |  there is no x such that { <LockHandle, x, Loc> } ⊆ Locks )
  ... the rest is the same as the old assignment rule.
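Continuing the earlier interpreter sketch (again mine, not the slides'): the configuration gains a Locks component, here a dict mapping a lock handle to a pair (owner, protected location), where the owner is either a thread handle or the string "free". The three rules above then become:

    def try_acquire(locks, lock_handle, t):
        """ACQUIRE rule: enabled only when the lock is free; the acquiring
        thread becomes the owner.  Returns True if the thread may advance."""
        owner, loc = locks[lock_handle]
        if owner == "free":
            locks[lock_handle] = (t, loc)
            return True
        return False               # not enabled: the thread stalls

    def release(locks, lock_handle, t):
        """RELEASE rule: the owning thread returns the lock to the free state."""
        owner, loc = locks[lock_handle]
        assert owner == t, "RELEASE by a thread that does not hold the lock"
        locks[lock_handle] = ("free", loc)

    def may_write(locks, loc, t):
        """New assignment rule guard: a protected location may only be written
        by the thread that currently holds its lock; unprotected locations
        are unrestricted."""
        for owner, protected in locks.values():
            if protected == loc:
                return owner == t
        return True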

28 Abstractions
Can we describe all the concurrency abstractions in Finkel's chapter using our primitives?
  Binary semaphore: equivalent to our ACQUIRE/RELEASE.
  Monitor: an abstraction built using a lock.
But: there is no way to set thread priorities with our mechanisms (the operational semantics gives no guarantees about which rule is used when multiple rules match).
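One way the monitor abstraction can be layered on a lock (a sketch, not the slides' definition; the class BoundedCounter and its methods are illustrative): a class whose every method runs while holding the monitor lock, with a condition variable for threads that must wait inside it.

    import threading

    class BoundedCounter:
        """Monitor-style wrapper: all access to the shared state happens while
        holding the monitor lock; waiting threads release it while blocked."""
        def __init__(self, limit):
            self._lock = threading.Lock()
            self._not_full = threading.Condition(self._lock)
            self._value, self._limit = 0, limit

        def increment(self):
            with self._not_full:                 # enter the monitor
                while self._value >= self._limit:
                    self._not_full.wait()        # give up the lock until signalled
                self._value += 1

        def decrement(self):
            with self._not_full:
                if self._value > 0:
                    self._value -= 1
                    self._not_full.notify()      # wake one waiting increment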

29 Summary
Hundreds of different concurrent programming languages exist: [Bal, Steiner, Tanenbaum 1989] lists over 200 papers on 100 different concurrent languages!
The primitives are easy (fork, join, acquire, release); finding the right abstractions is hard.

30 Charge
Linda papers:
  Describe an original approach to concurrent programming.
  Basis for Sun's JavaSpaces technology (a framework for distributed computing using Jini).
Project progress:
  You should have working implementations this week.
  Schedule a meeting with me if you are behind schedule.

