CS 584 Lecture 17
- Assignment?
- C* program
- Papers
- Test
  - When?

Glenda Help
- Glenda tutorial
  - PostScript paper
- Local configuration
  - HTML document
- Both available from our web page

Glenda
- Supports 2 models of parallelism
  - Agenda
  - Master-slave
- SPMD, kind of
  - The master just spawns jobs and waits for them to finish (see the sketch below)
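
A rough sketch of the agenda style (hedged: the glenda.h header name is an assumption, and the gl_ calls and the ? matching syntax are defined on the following slides; this is .cg source meant for the cgpp preprocessor):

    /* Agenda-style sketch: the master fills tuple space with task
       tuples and collects result tuples in any order. */
    #include "glenda.h"   /* hypothetical header name */

    #define NTASKS 16

    void master_agenda(void)
    {
        int i, id, answer;

        for (i = 0; i < NTASKS; i++)
            gl_out("task", i);               /* one tuple per task */
        for (i = 0; i < NTASKS; i++)
            gl_in("result", ? id, ? answer); /* gather, any order  */
    }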

Glenda goals
- Be Linda
- Hide PVM as much as possible
- Maintain PVM's message-passing ability
- Maintain PVM's portability

Linda vs. Glenda
- Linda has no mechanism to determine a process number.
- Linda provides no means to communicate other than tuple space.
- Glenda does not provide the eval function.
  - We must spawn the process and put out a tuple for it to evaluate, as sketched below.
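
Where Linda would write eval("worker", work(x)) to create a live tuple, a Glenda program does the same thing in two explicit steps. A minimal sketch, assuming a worker executable named worker:

    /* Linda's eval("worker", work(x)) becomes an explicit spawn
       plus a tuple for the new process to evaluate. */
    int wtid = gl_spawn("worker");  /* start the worker binary    */
    gl_out("work", x);              /* worker gl_in's this tuple  */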

Glenda functions
- tid = gl_mytid()
- tid = gl_spawn(char *name)
- gl_out(char *key, ...)
- gl_in(char *key, ...), and similarly gl_inp
- gl_rd(char *key, ...), and similarly gl_rdp
- gl_outto(int tid, char *key, ...)
- gl_into(char *key, ...)
- gl_exit()

Glenda programming
- First join Glenda by calling gl_mytid.
- Then the master uses gl_spawn to start up as many workers as you need.
  - One worker per call
  - The tid of the worker is returned
- gl_outto and gl_into are used to send and receive tuples directly.
- No structures, unions, or typedefs.
- A skeleton following these steps is sketched below.
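
A minimal master/worker skeleton (hedged: the glenda.h header name, the worker executable name, and the int-only result are assumptions; both files are .cg source for cgpp):

    /* master.cg -- join, spawn workers, send each a job directly,
       then gather result tuples from tuple space. */
    #include "glenda.h"

    #define NWORKERS 4

    int main(void)
    {
        int i, w, val, tids[NWORKERS];

        gl_mytid();                        /* join Glenda first     */
        for (i = 0; i < NWORKERS; i++)
            tids[i] = gl_spawn("worker");  /* one worker per call   */
        for (i = 0; i < NWORKERS; i++)
            gl_outto(tids[i], "job", i);   /* direct send to worker */
        for (i = 0; i < NWORKERS; i++)
            gl_in("done", ? w, ? val);     /* gather results        */
        gl_exit();
        return 0;
    }

    /* worker.cg -- receive the direct tuple, publish a result. */
    #include "glenda.h"

    int main(void)
    {
        int me = gl_mytid(), j;

        gl_into("job", ? j);        /* tuple sent with gl_outto */
        gl_out("done", me, j * j);  /* scalar ints only         */
        gl_exit();
        return 0;
    }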

gl_out
- Variable argument list
- Values can be scalar or arrays
- Array sizes are implicit unless declared
- Sizes of 2-D arrays must be declared

    gl_out("data", j, k, val);
    gl_out("row", l, x:len);    // declares the length to be len
    gl_out("col", x[j]:len);    // 2-D array

gl_in, gl_inp, gl_rd, and gl_rdp
- Similar to gl_out
- gl_in blocks; gl_inp returns 1 or 0
- The ? is used to indicate variable data
- Other arguments are used for matching

    gl_in("data", j, k, ? val);
    gl_in("row", l, ? x:len);   // declares the length to be len
    gl_in("col", ? x[j]);       // 2-D array or single element

gl_outto and gl_into
- Send a tuple directly to a process
- Similar to gl_out and gl_in
- gl_outto requires the destination tid

    gl_outto(tid, "data", j, k, val);
    gl_into("data", j, k, ? val);

- If tid is an array, gl_outto is a broadcast:

    gl_outto(tid:len, "data", j, k, val);

How does Glenda work?
- Runs on top of PVM
- A special process, the Global Tuple Server (gts), holds tuple space
- Glenda uses a preprocessor to convert a Glenda program into a PVM program

Using Glenda
- Add ~snell/glenda/bin to your PATH
- Copy the global tuple server to your pvm3/bin/HPPA directory
- Write your Glenda program
  - It must have a .cg extension
- Run cgpp on your program
- Compile the result using the C compiler

Using Glenda (continued)
- Copy your executables to your pvm3/bin/HPPA directory
- Start up PVM
- Configure your virtual machine
- Run gts
- Run your master program (a sample session follows)
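
Putting those steps together, a session might look roughly like this (illustrative throughout: the paths, the cgpp output file name, the link flags, and starting gts via the PVM console are assumptions, not documented behavior):

    $ export PATH=$PATH:~snell/glenda/bin
    $ cp ~snell/glenda/bin/gts ~/pvm3/bin/HPPA/    # global tuple server
    $ cgpp master.cg && cc -o master master.c -lpvm3
    $ cgpp worker.cg && cc -o worker worker.c -lpvm3
    $ cp master worker ~/pvm3/bin/HPPA/
    $ pvm                    # start PVM
    pvm> add host2           # configure the virtual machine
    pvm> spawn gts           # run the global tuple server
    pvm> spawn master        # run your master program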

Assignment
- Redo Lab 1 using Glenda.
- Compare your speedups and execution times with the previous two labs.