
1 The Language and Algorithms By Dr. Oberta Slotterbeck Computer Science Professor Emerita Hiram College ASC Associative Computing

2 Associative Computers An Associative Computer is a SIMD computer with a few additional features supported in hardware. These features can be supported (less efficiently) in software on traditional SIMDs. The name "associative" is due to the machine's ability to locate items in the memory of the PEs by content rather than by location.
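The phrase "locate items by content rather than location" can be illustrated with a short sequential sketch (plain Python standing in for the PE array; the records and the responders helper below are hypothetical illustrations, not part of ASC):

```python
# Hypothetical sketch: each "PE" holds one record; an associative search
# activates every PE whose record matches the query, conceptually in one
# parallel step, with no addresses or pointers involved.
records = [
    {"tail": "a", "head": "b", "weight": 2},
    {"tail": "a", "head": "c", "weight": 4},
    {"tail": "c", "head": "e", "weight": 1},
]

def responders(records, **query):
    """Return the indices of all records matching the query by content."""
    return [i for i, r in enumerate(records)
            if all(r[k] == v for k, v in query.items())]

print(responders(records, tail="a"))           # PEs whose edge leaves node 'a'
print(responders(records, tail="c", weight=1))  # content match on two fields
```

On real associative hardware the match happens in all PEs at once; the loop here merely models the result.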

3 Motivation for the ASC Language The STARAN computer (Goodyear Aerospace, early 1970s) and the later ASPRO provided the motivation for the ASC model. ASC extends the data-parallel programming style to a complete computational model and provides a practical model that supports massive parallelism. An ASC compiler and emulator exist and can be used in teaching.

4 Why Introduce ASC in Classes? We professionals have not yet decided which computational models should be used for parallel computers. Introducing students to parallel programming in ASC gives them a different way to think about parallel computing than message passing or shared memory. This approach may well become mainstream as multi-core massively parallel machines become available again.

5 Where and How Has ASC Been Taught? Certainly at Kent State University, where Dr. Jerry Potter, the language designer, taught for years. It has also been used in a variety of undergraduate and graduate courses taught by Dr. Johnnie Baker since 1987. Dr. Jerry Potter, Dr. Johnnie Baker, and Dr. Robert Walker have supervised numerous graduate research projects using the ASC model.

6 Where and How Has ASC Been Taught? At Hiram College:
– a small, mainly four-year liberal arts college
– ~80% of students are the first family member to go to college
– students are bright, but come from rural school systems
In the early 1980s, Dr. Slotterbeck did several senior seminar parallel projects with students using the Goodyear Aerospace ASPROL emulator, on which ASC is based.

7 Where and How Has ASC Been Taught? ASC was introduced as a programming language in a Parallel Computing class at Hiram College in 2005, alongside MPI. Students found the ASC language easier to use than MPI (verbal comments and class evaluations, not a scientific study).

8 Where and How Has ASC Been Taught? In a 2009 Algorithms class at Hiram College, ASC was introduced in one class period and an earlier version of the ASC Primer was handed out. After going through an algorithm, such as the minimal spanning tree algorithm, either an ASC version of the algorithm was presented or an ASC program for it was assigned as homework. Students in this class seemed to remember the algorithms better than earlier classes had (anecdotal evidence only). Why?

9 Why I Believe ASC Helped Students Remember Algorithms Algorithm descriptions typically do not specify data structures. With most languages, however, data structure choices are required and account for a high percentage of the code and of the discussion about the algorithm. With ASC, the code stays very close to the algorithm description. We'll examine the MST (minimum spanning tree) algorithm to illustrate this.

10 Algorithms and Programs Implemented in ASC A wide range of algorithms has been implemented in ASC without the use of the PE network:
– Graph algorithms: minimum spanning tree, shortest path, connected components
– Computational geometry algorithms: convex hull algorithms (Jarvis March, Quickhull, Graham Scan, etc.), dynamic hull algorithms

11 ASC Algorithms and Programs (not requiring the PE network)
– String matching algorithms: all exact substring matches; all exact matches with "don't care" (i.e., wild card) characters
– Algorithms for NP-complete problems: traveling salesperson, 2-D knapsack
– Database management software: associative database, relational database

12 ASC Algorithms and Programs (not requiring a PE network)
– A two-pass compiler for ASC (not the one we will be using). This compiler runs on an associative computer and uses ASC parallelism: first pass, optimization phase
– Two rule-based inference engines for AI: an expert-system OPS-5 interpreter and PPL (a Parallel Production Language interpreter)
– A context-sensitive language interpreter (OPS-5 variables force context sensitivity)
– An associative PROLOG interpreter

13 Associative Algorithms & Programs (using a network) There are numerous associative algorithms and programs that use a PE network:
– A 2-D knapsack ASC algorithm using a 1-D mesh
– Image processing algorithms using a 1-D mesh
– FFT (Fast Fourier Transform) using 1-D nearest-neighbor and Flip networks
– Matrix multiplication using a 1-D mesh
– An air traffic control program (using a Flip network connecting PEs to memory)
– A Smith-Waterman sequence analyzer using a linear network

14 ASC Basics – Visualizing an Associative Computer

15 ASC Basics – Overview There are two types of variables in the ASC model:
– parallel variables (i.e., ones for the PEs)
– scalar variables (i.e., ones used by the IS)
Scalar variables are essentially global variables; each scalar variable could be replaced by its scalar value stored in every entry of a parallel variable. To distinguish between the variable types, parallel variable names end with a "$" symbol.
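The scalar-versus-parallel distinction can be sketched in plain Python (a hypothetical analogy; the names weight and threshold are made up for illustration):

```python
# Hypothetical sketch: a parallel variable holds one value per PE,
# while a scalar variable is a single value broadcast by the IS.
n_pes = 4
weight = [2, 4, 1, 3]   # parallel variable: weight[$], one entry per PE
threshold = 2           # scalar variable, visible to every PE

# Broadcasting the scalar is equivalent to replicating it per PE,
# which is why scalars behave like global variables in ASC.
replicated = [threshold] * n_pes
mask = [w <= t for w, t in zip(weight, replicated)]
print(mask)  # [True, False, True, False]
```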

16 Visualizing an associative computer executing c[$] = a[$] + b[$]; (G denotes a garbage/undefined value; PEs with mask[$] = 0 are inactive and leave c[$] unchanged; the rest of each PE's memory, also garbage, is not shown):

PE  a[$]  b[$]  c[$]  mask[$]
 1    1     3     4      1
 2    3     2     5      1
 3    5     4     9      1
 4    2     6     G      0
 5    4     5     G      0
 6    1     2     3      1
 7    7     3    10      1
 8    2     1     G      0
 9    3     4     7      1
10    G     G     G      G
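The masked parallel assignment on this slide can be mimicked sequentially (a plain-Python sketch of the semantics, not ASC; the loop stands in for one simultaneous SIMD step, and None stands in for the garbage "G" values):

```python
# Data matching the slide's table: nine PEs with fields a[$], b[$], mask[$].
a    = [1, 3, 5, 2, 4, 1, 7, 2, 3]
b    = [3, 2, 4, 6, 5, 2, 3, 1, 4]
mask = [1, 1, 1, 0, 0, 1, 1, 0, 1]
c    = [None] * len(a)        # None models the garbage ("G") entries

for pe in range(len(a)):      # conceptually simultaneous on a SIMD
    if mask[pe]:              # only active PEs execute c[$] = a[$] + b[$]
        c[pe] = a[pe] + b[pe]

print(c)  # [4, 5, 9, None, None, 3, 10, None, 7]
```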

17 Visualizing an associative computer executing if (b[$] == 2 || a[$] == 2) then c[$] = 4; else c[$] = 8; endif; (G denotes garbage; PEs with mask[$] = 0 are inactive):

PE  a[$]  b[$]  c[$]  mask[$]
 1    1     3     8      1
 2    3     2     4      1
 3    5     4     8      1
 4    2     6     G      0
 5    4     5     G      0
 6    1     2     4      1
 7    7     3     8      1
 8    2     1     G      0
 9    3     4     8      1
10    G     G     G      G
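The data-parallel if/else can be sketched the same way (plain Python as an analogy for one SIMD step, not ASC; note that PE 4 has a[$] == 2 but stays untouched because its mask bit is 0):

```python
# Same PE data as before; the conditional runs only in active PEs.
a    = [1, 3, 5, 2, 4, 1, 7, 2, 3]
b    = [3, 2, 4, 6, 5, 2, 3, 1, 4]
mask = [1, 1, 1, 0, 0, 1, 1, 0, 1]
c    = [None] * len(a)

for pe in range(len(a)):
    if mask[pe]:              # only active responders participate
        c[pe] = 4 if (b[pe] == 2 or a[pe] == 2) else 8

print(c)  # [8, 4, 8, None, None, 4, 8, None, 8]
```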

18 Four Unusual Hardware Functions MAXVAL (MINVAL) returns the maximum (minimum) value of the specified item among the active responders. The MAXDEX (MINDEX) function returns the index of an entry where the maximum (minimum) value of the specified item occurs among the active responders; this index can also be used to retrieve the related fields. These are constant-time functions, just as memory accesses are viewed as constant-time for sequential machines.
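Assuming "active responders" means the PEs whose mask bit is set, the min-side pair can be sketched in Python (minval and mindex here are hypothetical helpers mirroring the hardware semantics; in hardware they take constant time, and MAXVAL/MAXDEX are symmetric):

```python
def minval(values, active):
    """MINVAL: minimum value of the field among active responders."""
    return min(v for v, a in zip(values, active) if a)

def mindex(values, active):
    """MINDEX: index of the first (lowest-order) active responder
    holding the minimum value; usable to fetch related fields."""
    m = minval(values, active)
    return next(i for i, (v, a) in enumerate(zip(values, active))
                if a and v == m)

weight = [2, 4, 1, 3]
active = [1, 1, 0, 1]   # PE 3 is masked off, so its 1 is ignored
print(minval(weight, active), mindex(weight, active))  # 2 0
```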

19 We Will Do a Step-by-Step Trace of an ASC Program This will show: (1) how simple ASC code is; (2) what some of the basic features of the language are; (3) what the ASC syntax looks like. If there is time, we'll run the program so you can see it in operation.

20 Be Aware There is no fancy GUI. ASC was not intended to be a production language; for example, subroutines are open, so recursion is not available. Today, the Linux/Ultrix version is not running, and there is no debugger for the Windows version (although there was for Linux). Many of these features could be added; but as a teaching tool, ASC operates quite well.

21 An Example – Prim's Algorithm As we will see, the algorithm is simple. [Figure: a weighted non-directed graph on nodes A through I with positive edge weights.]

22 MST (Prim's Algorithm) Assume a connected graph whose nodes are labeled by some identifying letter or number and whose arcs are non-directional and have positive weights associated with them. The MST problem is to find a minimum spanning tree.

23 We need four sets of nodes: V0, V1, V2, and V3. We will call the sets states. [Figure: the same weighted graph on nodes A through I.]

24 ASC Program for MST The code that follows is all the code (and documentation) needed to solve the MST problem using associative computing techniques and the ASC language. Follow along with Example 6 (p. 8) in "Sample ASC Programs". That example is very slightly different, but I ask students in class what the difference is and why it does not affect the correctness of the solution.

25 Representing a Non-directed Graph – 2 PEs Needed for Each Edge (blank fields are not yet set; graph[$] = 1 marks the PEs holding edge data):

PE  TAIL  HEAD  WEIGHT  STATE  XX  NXTNODE  GRAPH
 1   a     b      2                            1
 2   b     a      2                            1
 3   a     c      4                            1
 4   c     a      4                            1
 5   e     f      3                            1
 6   f     e      3                            1
 7   c     e      1                            1
 8   e     c      1                            1
 9
10

26 Before you panic when thinking about the number of PEs needed, remember:
– We are talking about massively parallel computers.
– Each PE is basically simple.
– SIMDs with 64K PEs existed in the past (the Connection Machine).
– We're using memory to gain speed.
– Unlike in the past, simple PEs and memory are cheap today.

27 THE ASC MST PROGRAM

Declare variables:

deflog (TRUE, 1);
deflog (FALSE, 0);
char parallel tail[$], head[$];
int parallel weight[$], state[$];

State is 1 for nodes in the MST, 2 for nodes being considered at a given point, 3 for nodes yet to be considered, and 0 for edges dropped from consideration.

28 Defines an edge by holding the node being considered currently:

char scalar nodehead, nodetail;

Bit to select one of the responders:

index parallel xx[$];

graph[$] set to 1 means this PE is working on the data; result[$] flags the MST nodes:

logical parallel nxtnod[$], graph[$], result[$];
associate head[$], tail[$], weight[$], state[$] with graph[$];

29 Input the graph:

read tail[$], head[$], weight[$] in graph[$];

Pick the first node and edge:

setscope graph[$]
  nodetail = tail[mindex(weight[$])];
  nodehead = head[mindex(weight[$])];
endsetscope;
state[nxtnod[$]] = 1;

30 Select all edges incident with the selected node and put the endpoint of each edge in V2; put the others in V3:

if (nodetail == tail[$]) then
  state[$] = 2;
else
  state[$] = 3;
endif;

Throw out reverse edges (i.e., put them in V0):

if (head[$] == nodetail && tail[$] == nodehead) then
  state[$] = 0;
endif;

31 Loop until there are no more nodes in V2:

while xx in (state[$] == 2)

Select the lowest-order PE holding the minimum weight of those in V2:

if (state[$] == 2) then
  nxtnod[$] = mindex(weight[$]);
endif;

Select the head node in the PE chosen above:

nodehead = head[nxtnod[$]];

Put the new node in V1:

state[nxtnod[$]] = 1;

32 If the edge XY was selected for V1, throw out the double edge YX:

if (head[$] == nodetail && tail[$] == nodehead) then
  state[$] = 0;
endif;

Throw out edges with head the same as the one picked, if not in V1:

if (head[$] == nodehead && state[$] != 1) then
  state[$] = 0;
endif;

33 Get new candidates:

if (state[$] == 3 && nodehead == tail[$]) then
  state[$] = 2;
endif;

Must clear when done, for the next iteration:

nxtnod[$] = FALSE;

End of loop:

endwhile xx;

34 All state-1 edges are in the final minimum spanning tree:

if (state[$] == 1) then
  result[$] = TRUE;
endif;

Print the results; the edges are in no particular order:

print tail[$] head[$] weight[$] in result[$];

That's all, folks!

end;

35 ASC-MST Algorithm Analysis Each step in this algorithm takes constant time. One MST edge is selected during each pass through the loop in this algorithm. Since a spanning tree has n-1 edges, the running time of this algorithm is O(n) and its cost is O(n^2), both of which are optimal. Moreover, there is no overhead devoted to maintaining PE communication or manipulating data structures.
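For tracing in class, the program's state transitions can be emulated sequentially (a hypothetical Python sketch of the associative semantics on the 8-PE example graph above, not ASC itself; as one small deviation from the slides, the sketch updates nodetail on every pass so the reverse-edge test always refers to the edge just chosen):

```python
# Edge states as in the slides: 1 = in tree, 2 = candidate,
# 3 = not yet considered, 0 = dropped. Each non-directed edge
# appears twice, once per direction (two PEs per edge).
edges = [("a","b",2), ("b","a",2), ("a","c",4), ("c","a",4),
         ("e","f",3), ("f","e",3), ("c","e",1), ("e","c",1)]
tail   = [e[0] for e in edges]
head   = [e[1] for e in edges]
weight = [e[2] for e in edges]
n = len(edges)

# Pick the first node and edge: the minimum-weight edge overall (MINDEX).
start = min(range(n), key=lambda i: weight[i])
nodetail, nodehead = tail[start], head[start]

# Edges leaving the start node become candidates (V2); the rest wait (V3).
state = [2 if tail[i] == nodetail else 3 for i in range(n)]
for i in range(n):                    # throw out the reverse edge (V0)
    if head[i] == nodetail and tail[i] == nodehead:
        state[i] = 0

while 2 in state:
    # MINDEX over candidates: lowest-index minimum-weight edge.
    nxt = min((i for i in range(n) if state[i] == 2),
              key=lambda i: weight[i])
    nodetail, nodehead = tail[nxt], head[nxt]
    state[nxt] = 1                    # put the new edge/node in V1
    for i in range(n):
        if head[i] == nodetail and tail[i] == nodehead:
            state[i] = 0              # reverse of the edge just taken
        elif head[i] == nodehead and state[i] != 1:
            state[i] = 0              # other edges into the new node
        elif state[i] == 3 and tail[i] == nodehead:
            state[i] = 2              # new candidates leave the new node

result = [(tail[i], head[i], weight[i]) for i in range(n) if state[i] == 1]
print(result, sum(w for _, _, w in result))
```

On this graph the sketch selects four tree edges of total weight 10; on an associative machine each inner loop over the PEs is a single constant-time parallel step, which is where the O(n) running time in the analysis comes from.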

36 Teaching Idea: Have students trace a run of an ASC algorithm in class with an Excel spreadsheet:

PE  TAIL  HEAD  WEIGHT  STATE  XX  NXTNODE  GRAPH
 1   a     b      2                            1
 2   b     a      2                            1
 3   a     c      4                            1
 4   c     a      4                            1
 5   e     f      3                            1
 6   f     e      3                            1
 7   c     e      1                            1
 8   e     c      1                            1
 9
10

37 Associative Computing vs. Other Styles ASC programs are often as easy to write as the algorithm description. However, a sequential or MIMD program for an algorithm can often obscure the simplicity of the algorithm, because various data structures must be devised to hold the data for the problem. Data structuring in ASC is very simple.

38 Associative Computing vs. Other Styles With associative computing:
– Pointers are not needed.
– Searching replaces sorting.
– Race conditions can't occur.
– Programs are deterministic.
– Deadlock can't occur.
– Solutions scale easily.
– No synchronization code is needed.
– No load balancing is needed.
– And more.

39 What Are the Main Drawbacks of Associative Computing?
– The biggest one: no manufacturer is building associative computers, although they existed in the past.
– People don't believe the code is doing parallel things, as it looks like sequential code.
– Although research at Kent has shown otherwise, many believe MIMDs are more efficient than SIMDs on more problems, even when communication costs are ignored.

40 What Are the Main Drawbacks of Associative Computing?
– People seem to believe it is difficult to program an associative computer.
– In truth, it does require thinking about parallel solutions in a different way than when you program a MIMD.
– However, that way of thinking can be learned and is easy to apply after a few programs are written.

41 A Story Many remember Grace Hopper's nanosecond talks, but I remember even more fondly her parallelism talk, one that inspired me to study that field. She had us assume we had a problem that wouldn't run on today's machines and compared the situation to a farmer who couldn't complete a plowing job with an ox that couldn't handle the weight of the plow.

42 A Story (Continued) She asked, "What do you do? Do you build a bigger ox, or do you yoke several oxen together to share the load?" Although she did not state it explicitly, she was thinking of a SIMD situation where all the oxen pull in a synchronous fashion, not a MIMD situation where each ox goes its own way. Personally, I think Grace Hopper was totally right! Thank you. Questions?

