Computer Science and Engineering Parallel and Distributed Processing CSE 8380 February 22, 2005 Session 12

Parallel Virtual Machine (PVM)
- Review
- Communication
- Synchronization
- Reduction operations
- Work assignments

Main Constructs in PVM
- Task creation
- Communication
- Synchronization
- Others

PVM Software
Two components:
- Library of PVM routines
- Daemon

PVM Application
A number of sequential programs, each of which corresponds to one or more processes in a parallel program.

Application Structure
- Star graph
- Tree

To create a child, you must specify:
1. The machine on which the child will be started
2. A path to the executable file on the specified machine
3. The number of copies of the child to be created
4. An array of arguments to the child tasks

pvm_spawn
num = pvm_spawn(child, arguments, flag, where, howmany, &tids)
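As a concrete illustration, here is a minimal C sketch of a supervisor spawning three copies of a worker with pvm_spawn. The executable path follows the later work-assignment slide; the number of copies, the empty argument list, and letting PVM choose the hosts are assumptions for illustration.

#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    int tids[3];   /* will receive the task IDs of the spawned children */

    /* Spawn 3 copies of the worker executable. PvmTaskDefault lets PVM
       choose the hosts, so the "where" argument is ignored here.
       NULL means the workers receive no command-line arguments. */
    int num = pvm_spawn("/user/rewini/worker", NULL, PvmTaskDefault,
                        "", 3, tids);

    if (num < 3)
        fprintf(stderr, "only %d of 3 workers were spawned\n", num);

    pvm_exit();    /* leave the virtual machine */
    return 0;
}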

Communication among Tasks
[Figure: a sending task and a receiving task, each layered as user application, PVM library, and daemon; a message passes from the sender's application through its library and daemon to the receiver's daemon, library, and application.]

Standard PVM asynchronous communication
- A sending task issues a send command (point 1)
- The message is transferred to the daemon (point 2)
- Control is returned to the user application (points 3 & 4)
- The daemon will transmit the message on the physical wire sometime after returning control to the user application (point 3)

Standard PVM asynchronous communication (cont.)
- The receiving task issues a receive command (point 5) at some other time
- In the case of a blocking receive, the receiving task blocks on the daemon waiting for a message (point 6). After the message arrives, control is returned to the user application (points 7 & 8)
- In the case of a non-blocking receive, control is returned to the user application immediately (points 7 & 8)

Send (3 steps)
1. A send buffer must be initialized
2. The message is packed into the buffer
3. The completed message is sent to its destination(s)

Receive (2 steps)
1. The message is received
2. The received items are unpacked

Message Buffers
Buffer creation (before packing):
bufid = pvm_initsend(encoding_option)
bufid = pvm_mkbuf(encoding_option)

Encoding option   Meaning
0                 XDR
1                 No encoding
2                 Leave data in place

Message Buffers (cont.)
Data packing: pvm_pk*()
- pvm_pkstr() takes one argument:
  pvm_pkstr("This is my data");
- The other packing functions take three arguments:
  1. Pointer to the first item
  2. Number of items to be packed
  3. Stride
  pvm_pkint(my_array, n, 1);
Packing functions can be called multiple times to pack data into a single message.

Sending a message
Point to point (one receiver):
info = pvm_send(tid, tag)
Broadcast (multiple receivers):
info = pvm_mcast(tids, n, tag)
info = pvm_bcast(group_name, tag)
Pack and send (one step):
info = pvm_psend(tid, tag, my_array, length, datatype)
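Putting the buffer, packing, and send slides together, a minimal sketch of the three-step send might look as follows; the destination TID, the tag value 7, and the array contents are assumptions for illustration.

#include <pvm3.h>

/* Send an integer array and a string to one receiver (three-step send). */
void send_work(int dest_tid)
{
    int my_array[5] = { 10, 5, 20, 8, 30 };
    const int MSG_TAG = 7;            /* assumed tag */

    /* Step 1: initialize a send buffer (PvmDataDefault selects XDR encoding) */
    pvm_initsend(PvmDataDefault);

    /* Step 2: pack the data into the buffer (pointer, count, stride) */
    pvm_pkint(my_array, 5, 1);
    pvm_pkstr("This is my data");

    /* Step 3: send the completed message to its destination */
    pvm_send(dest_tid, MSG_TAG);
}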

Receiving a message
Blocking:
bufid = pvm_recv(tid, tag)
(-1 is a wild card for either tid or tag)
Non-blocking:
bufid = pvm_nrecv(tid, tag)
(bufid = 0 if no message was received)
Timeout:
bufid = pvm_trecv(tid, tag, timeout)
(bufid = 0 if no message was received)
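A small C sketch of the three receive variants, with the source TID, tag, and 2-second timeout assumed for illustration; note that in the PVM 3 library the timeout of pvm_trecv is passed as a struct timeval, which the slide abbreviates as a single argument.

#include <sys/time.h>
#include <pvm3.h>

void receive_examples(int src_tid, int tag)
{
    int bufid;

    /* Blocking: wait until a matching message arrives
       (-1 for src_tid or tag acts as a wild card) */
    bufid = pvm_recv(src_tid, tag);

    /* Non-blocking: returns immediately; bufid == 0 means no message yet */
    bufid = pvm_nrecv(src_tid, tag);
    if (bufid == 0) {
        /* do other useful work and try again later */
    }

    /* Timeout: block for at most 2 seconds, then give up (bufid == 0) */
    struct timeval tmout = { 2, 0 };
    bufid = pvm_trecv(src_tid, tag, &tmout);
}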

Different Receives in PVM
[Timeline figure comparing the three receive calls]
- pvm_recv() (blocking): from the time the function is called, the task waits until the message arrives.
- pvm_nrecv() (non-blocking): the function returns immediately and the task continues execution.
- pvm_trecv() (timeout): the task waits until either the message arrives or the time expires, then resumes execution.

Data unpacking
pvm_upk*()
- pvm_upkstr() takes one argument:
  pvm_upkstr(string);
- The other unpacking functions take three arguments:
  1. Pointer to the first item
  2. Number of items to be unpacked
  3. Stride
  pvm_upkint(my_array, n, 1);
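To complete the earlier send sketch, here is a matching receive-and-unpack sketch; the source TID, the tag, and the buffer sizes are assumed to match the sender, and items are unpacked in the same order in which they were packed.

#include <pvm3.h>

/* Receive and unpack the message produced by the send sketch above. */
void receive_work(int src_tid)
{
    int my_array[5];
    char text[64];                    /* assumed large enough for the string */
    const int MSG_TAG = 7;            /* must match the sender's tag */

    /* Step 1: blocking receive of the message */
    pvm_recv(src_tid, MSG_TAG);

    /* Step 2: unpack in the same order the data was packed */
    pvm_upkint(my_array, 5, 1);
    pvm_upkstr(text);
}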

Task Synchronization
Synchronization constructs are used to force a certain order of execution among the activities in a parallel program.
Synchronization constructs:
- Blocking receive
- Barriers

Blocking Receive
Task T0 (TID = 100): f(); pvm_send(200, tag)
Task T1 (TID = 200): pvm_recv(100, tag); g()
g() in T1 is not executed until f() in T0 has finished.
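A minimal sketch of the two task bodies; the TIDs 100 and 200 and the work functions f() and g() come from the slide, while the tag value and the (empty) message contents are assumptions for illustration.

#include <pvm3.h>

void f(void);          /* work functions assumed as in the slide */
void g(void);

#define TAG 1          /* assumed message tag */

/* Body of T0 (TID = 100 in the slide): do f(), then signal T1. */
void task_T0(void)
{
    f();                               /* work that must finish first */
    pvm_initsend(PvmDataDefault);
    pvm_send(200, TAG);                /* send (empty) message to T1 (TID = 200) */
}

/* Body of T1 (TID = 200 in the slide): wait for T0's message, then do g(). */
void task_T1(void)
{
    pvm_recv(100, TAG);                /* blocks until T0 has sent */
    g();                               /* cannot start before f() has finished */
}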

Group Barrier in PVM
Tasks T0, T1, and T2 belong to the group "slave". Each task calls pvm_barrier("slave", 3) and then waits; once all three have reached the barrier (the synchronization point), they all proceed.
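A minimal sketch of how each member of the group might use the barrier; the group name "slave" and the count of 3 follow the slide, while the phase functions and the join/leave calls are assumptions for illustration.

#include <pvm3.h>

void do_phase1(void);    /* placeholder work before the barrier */
void do_phase2(void);    /* placeholder work after the barrier */

void worker(void)
{
    /* Join the group; pvm_joingroup returns this task's instance number */
    pvm_joingroup("slave");

    do_phase1();

    /* Wait here until 3 members of "slave" have called pvm_barrier */
    pvm_barrier("slave", 3);

    do_phase2();         /* all three tasks proceed together from here */

    pvm_lvgroup("slave");
    pvm_exit();
}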

Reduction Operation
info = pvm_reduce(func, data, n, datatype, tag, group_name, root)
Example:
info = pvm_reduce(PvmSum, dataarray, 5, PVM_INT, tag, "slave", root)

Task        Before reduction      After reduction
T0          10, 5, 20, 8, 30      10, 5, 20, 8, 30
T1 (root)   2, 15, 4, 12, 6       20, 45, 30, 30, 50
T2          8, 25, 6, 10, 14      8, 25, 6, 10, 14
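A minimal sketch of the example call as each group member might issue it; the reduction function, group name, and element count follow the slide, while the tag value and root instance number are assumptions for illustration.

#include <pvm3.h>

/* Each member of "slave" contributes 5 integers; after the call the
   element-wise sums are available in dataarray on the root member only. */
void sum_over_group(int dataarray[5])
{
    const int tag  = 9;     /* assumed message tag */
    const int root = 1;     /* assumed instance number of the root (T1 in the slide) */

    pvm_reduce(PvmSum, dataarray, 5, PVM_INT, tag, "slave", root);
}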

Work Assignment (different programs)
info1 = pvm_spawn("/user/rewini/worker1", 0, 1, "lpc01", 1, &tid1)
info2 = pvm_spawn("/user/rewini/worker2", 0, 1, "lpc02", 1, &tid2)
info3 = pvm_spawn("/user/rewini/worker3", 0, 1, "lpc03", 1, &tid3)
info4 = pvm_spawn("/user/rewini/worker4", 0, 1, "lpc04", 1, &tid4)
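In these calls the flag value 1 corresponds to PvmTaskHost, so the fourth argument names the specific host on which each worker is started. A C sketch of the first call, with a simple error check added for illustration:

#include <stdio.h>
#include <pvm3.h>

/* Spawn one copy of worker1 on host lpc01 (PvmTaskHost selects the host). */
int spawn_worker1(int *tid1)
{
    int info1 = pvm_spawn("/user/rewini/worker1", NULL,
                          PvmTaskHost, "lpc01", 1, tid1);
    if (info1 != 1)
        fprintf(stderr, "failed to spawn worker1 on lpc01\n");
    return info1;
}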

Work Assignment (Same Program)
If we know that the worker IDs are 1, 2, ..., n-1:
switch (my_id) {
    case 1:   /* Work assigned to the worker whose id number is 1 */
        break;
    case 2:   /* Work assigned to the worker whose id number is 2 */
        break;
    ...
    case n-1: /* Work assigned to the worker whose id number is n-1 */
        break;
    default: ;
} /* end switch */

Using a task ID array to get my_id
- The supervisor sends an array containing the TIDs of all the tasks to all the workers.
- The supervisor's TID is saved in element zero of the array, and the workers' TIDs are saved in elements 1 to n-1.
- Each worker searches the array for its own TID; the index at which it is found identifies the corresponding worker.
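A minimal sketch of the worker side of this scheme; the total number of tasks, the message tag, and the assumption that the supervisor packed the whole TID array into a single message are all illustrative choices, not part of the slide.

#include <pvm3.h>

#define N    8         /* assumed total number of tasks (supervisor + workers) */
#define TAG  5         /* assumed tag used by the supervisor for the TID array */

int get_my_id(void)
{
    int tids[N];
    int my_tid = pvm_mytid();          /* this task's own TID */
    int parent = pvm_parent();         /* the supervisor's TID */
    int my_id  = -1;
    int i;

    /* Receive the TID array sent by the supervisor:
       tids[0] is the supervisor, tids[1..N-1] are the workers. */
    pvm_recv(parent, TAG);
    pvm_upkint(tids, N, 1);

    /* Search for my own TID; the index where it is found is my worker id. */
    for (i = 1; i < N; i++)
        if (tids[i] == my_tid)
            my_id = i;

    return my_id;
}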

Using task groups to get my_id
- All the tasks join one group, and the instance numbers are used as the new task identifiers.
- The supervisor is the first to join the group and gets instance number 0.
- The workers get instance numbers in the range 1 to n-1.
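A minimal sketch of this approach; the group name "workers" is an assumption for illustration, and pvm_joingroup returns the caller's instance number directly.

#include <pvm3.h>

/* Each task (supervisor and workers alike) calls this once at startup. */
int get_my_id_by_group(void)
{
    /* The first task to join (the supervisor) gets instance 0;
       the workers get 1, 2, ..., n-1 in the order they join. */
    int my_id = pvm_joingroup("workers");
    return my_id;
}

Because instance numbers depend on join order, the supervisor must join the group before spawning the workers so that it reliably receives instance number 0.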