
1  CS 584 Lecture 14
   - Assignment
     - Program due now
     - Check the web
       - Create program model & do calculations
       - Additional calculation problems
   - Paper presentations on Friday by:
     - Matt Probst
     - Glenn Judd

2  High-Level Parallel Programming
   - Message passing is considered to be low-level parallel programming.
   - Why not have a high-level parallel programming language?
     - Easier?
     - More efficient?
     - Same performance?

3  High-Level Parallel Programming
   - Many high-level languages have been introduced:
     - SR (University of Arizona)
     - HPF
     - C*
     - others
   - A high-level program is compiled to message-passing code, which is then compiled to machine code for execution.

4  Data Parallel Programming
   - Parallel & scalar data
   - Based on operations being performed on each data element of a parallel variable.
   - A statement executed on parallel data is performed on each element individually.
   - Functions are provided to include the element index in the operation.

5  C*
   - Data parallel language
   - Based on standard C with extensions
   - Produces code for both SIMD and MIMD machines
   - Accepts any standard C program and compiles it correctly

6  C*
   - Presents a global-view abstraction of the parallel machine.
   - Data can be parallel or scalar.
     - Scalar is the default.
   - Parallel data is acted upon by new or overloaded operators and statements.
   - Parallel variables are seen and operated on as monolithic vectors or arrays.

7  C* Additions to C
   - Reserved words
     - bool, dimof, everywhere, overload, pcoord, shape, where, with, and others
   - New operators
     - <? (min), >? (max), <?=, >?=, %% (modulus)
     - The [] index operator is used as a unary prefix.
   - Overloaded operators
     - reductions, etc.

8  Parallel Variables
   - A shape is an indication of the arrangement of a parallel variable.
   - Parallel variables are declared in two steps:
     - Declare the shape.
     - Declare a variable that is based on the shape.
   - The compiler automatically distributes the variable across the architecture.

9  Declaring Parallel Variables
   - Shape declaration
     - Gives the compiler clues on partitioning
     - Uses left indexing followed by the shape name
     - shape [10]Sb, [50][30]Sc;
   - Variable declaration
     - Declare the type and shape of each variable
     - int ai1:Sb, ai2:Sc;

10  Parallel Variables
   - Shapes can be used for all variable types, including structures and unions.
   - Shapes can be dynamically created, just as arrays are.
     - Single dimension only
     - Fake multiple dimensions

11  Using Parallel Variables
   - Addressed just like arrays, only backwards: the index comes first.

         [5][6]ai2 = 23;

         for (i = 0; i < 10; i++)
             [i]c = [i]a + [i]b;

12  Parallel Variable Operations
   - Parallel-to-scalar reduction
     - scalar op parallel_variable
     - x += ai1;
   - The result is combined with the scalar value!
   - Reduction operators:

         +=   sum           &=   bitwise AND    <?=  min
         -=   neg. sum      ^=   bitwise XOR    >?=  max
         *=   product       |=   bitwise OR
         /=   1/product

13  Contextualization
   - Sets up a boolean mask to prevent an operation on certain elements of a shape.
   - The where statement:
     - where (where-expression) then-body
     - where (where-expression) then-body else else-body

14  Contextualization
   - The where-expression must be based on the shape that will be operated on.
   - Example:
     - where (b >= 3) c = b + a;
   - Assignment only occurs in the active positions.

15  The with Statement
   - The with statement is used to select the current shape to operate on.
   - A with statement must select a current shape before any parallel code may be executed.
   - with (shape-expression) shape-body

16  pcoord
   - Works on the current shape.
   - Returns the element's index; you supply the dimension you are interested in.
   - int:current pcoord(int dim)
     - dim == 0 returns the row
     - dim == 1 returns the column

17

    #include <stdio.h>
    #define SIZE 1000

    shape [SIZE]span;

    main() {
        double sum;
        double width;
        with (span) {
            double x:span;
            x = (pcoord(0) + 0.5) * 1.0/SIZE;
            sum = (+= (4.0/(1 + x*x)));
        }
        sum *= 1.0/SIZE;
        printf("Estimation is %lf\n", sum);
    }

18  Linda
   - Consists of several operations that work on a global data space (tuple space).
   - The operations have been added to several languages.
   - MIMD programming model
     - Interaction is through the tuple space.

19  Tuples
   - A tuple is an object consisting of:
     - a key
     - zero or more arguments
       - Example: ("jim", 88, 1.5)
   - The key is used for matching.

20  Tuple Space
   - Global data space
   - Collection of tuples
   - Tuples may be:
     - inserted (out)
     - read (rd and rdp)
     - deleted (in and inp)
     - evaluated (eval)
       - forks a new worker

21 Tuple Space

22
   - Updating a tuple: delete - modify - insert
   - Duplicate key entries are allowed.
     - Non-determinism
   - inp and rdp are guaranteed to locate a matching tuple iff a matching tuple was added and could not have been removed before the request.

23  Example Programs
   - Database search
   - Master-Worker
   - Divide and Conquer

24

    procedure manager
        count = 0
        until EOF do
            read datum from file
            OUT("datum", datum)
            count++
        enddo
        best = 0.0
        for j = 1 to count
            IN("score", value)
            if (value > best) best = value
        endfor
        for j = 1 to numworkers
            OUT("datum", "stop")
        endfor
    end

25

    procedure worker
        IN("datum", datum)
        until datum == "stop" do
            value = compare(datum, target)
            OUT("score", value)
            IN("datum", datum)
        enddo
    end

26  Tuple Space
   - Perfect candidate for a database.
   - Simplifies parallel programming?
   - Performance?
     - Consider the implementation of the tuple space.

27  Tuple Space Implementation
   - Central
   - What advantages/disadvantages does this implementation present?

28  Tuple Space Implementation
   - Distributed
   - What advantages/disadvantages does this implementation present?


