Message Passing Interface In Java for AgentTeamwork (MPJ) By Zhiji Huang Advisor: Professor Munehiro Fukuda 2005.


AgentTeamwork A user requests computing nodes from AgentTeamwork. AgentTeamwork then manages those resources automatically for performance and fault tolerance.

AgentTeamwork Layers (top to bottom):
- User applications in Java
- mpiJava API
- User Program Wrapper
- mpiJavaSocket / mpiJavaAteam
- Java Socket / GridTcp (AgentTeamwork)
- Java Virtual Machine
- Operating Systems
- Hardware


GridTCP Extends TCP by adding message-saving and check-pointing features:
- Automatically saves in-transit messages.
- Provides check-pointing, i.e., snapshots of program execution.
- Ultimately allows programs to recover from errors such as node crashes.

GridTcp

public class MyApplication {
  public GridIpEntry ipEntry[]; // used by the GridTcp socket library
  public int funcId;            // used by the user program wrapper
  public GridTcp tcp;           // the GridTcp error-recoverable socket
  public int nprocess;          // #processors
  public int myRank;            // processor id (or MPI rank)

  public int func_0( String args[] ) {  // constructor
    MPJ.Init( args, ipEntry, tcp );     // invoke mpiJava-A
    .....;                              // more statements to be inserted
    return 1;                           // calls func_1( )
  }
  public int func_1( ) {                // called from func_0
    if ( MPJ.COMM_WORLD.Rank( ) == 0 )
      MPJ.COMM_WORLD.Send( ... );
    else
      MPJ.COMM_WORLD.Recv( ... );
    .....;                              // more statements to be inserted
    return 2;                           // calls func_2( )
  }
  public int func_2( ) {                // called from func_1, the last function
    .....;                              // more statements to be inserted
    MPJ.finalize( );                    // stops mpiJava-A
    return -2;                          // application terminated
  }
}

Message Passing Interface An API that facilitates communication (message passing) among the processes of a distributed program. Implementations usually exist for FORTRAN, C/C++, and Java. Current Java implementations are actually Java wrappers around native C code, which hurts portability and does not fit the AgentTeamwork concept. (Figure: the SPMD model, with one user program running as MPI processes P0-P3.)

Message Passing Interface – Basic Functions Init() Send()/Recv() Bcast() Gather()
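Taken together, a minimal mpiJava-style program using these calls might look like the following sketch. This is illustrative only: it will not run without AgentTeamwork's MPJ library, and the exact Bcast argument list and the MPJ.DOUBLE constant are assumed from the mpiJava-style API rather than taken from the slides.

```java
// Sketch: rank 0 broadcasts an array, every rank prints its copy.
public class Hello {
    public static void main(String[] args) throws Exception {
        MPJ.Init(args);                  // Java socket-based initialization
        int rank = MPJ.COMM_WORLD.Rank();
        int size = MPJ.COMM_WORLD.Size();
        double[] data = new double[4];
        if (rank == 0) data[0] = 3.14;   // only the root fills the buffer
        // mpiJava-style signature (assumed): buffer, offset, count, type, root
        MPJ.COMM_WORLD.Bcast(data, 0, data.length, MPJ.DOUBLE, 0);
        System.out.println("rank " + rank + " of " + size + " got " + data[0]);
        MPJ.Finalize();                  // shut MPJ down
    }
}
```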

MPJ – mpiJavaS & mpiJavaA Main classes: MPJ, Communicator, JavaComm, GridComm, Datatypes, MPJMessage, Op, etc.

MPJ Contains the main MPI operations. A call to the traditional Init(String[]) initializes Java socket-based connections; a call to Init(String[], IpTable, GridTcp) initializes connections over GridTCP. Also provides Rank(), Size(), Finalize(), etc.

Communicator Provides all communication functions.
Point-to-point:
- Blocking – Send(), Recv()
- Non-blocking – Isend(), Irecv()
Collective – Gather(), Scatter(), Reduce(), and variants.

JavaComm & GridComm
JavaComm: Java Sockets and ServerSockets.
GridComm: GridTcp sockets, the GridTcp object, the IpTable, and other state needed by GridTCP.
Both expose InputStreamForRank[] and OutputStreamForRank[], which allow socket communication in terms of raw bytes. The same communication algorithms therefore work over both GridComm and JavaComm, giving a clean interface between the two layers.
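The per-rank stream arrays can be pictured with in-memory streams. This is a stand-in sketch with hypothetical class and field names; in the real system the arrays wrap Java sockets (JavaComm) or GridTcp sockets (GridComm):

```java
import java.io.ByteArrayOutputStream;

public class RankStreams {
    // One output stream per destination rank, as in JavaComm/GridComm.
    final ByteArrayOutputStream[] outputStreamForRank;

    RankStreams(int nprocs) {
        outputStreamForRank = new ByteArrayOutputStream[nprocs];
        for (int r = 0; r < nprocs; r++)
            outputStreamForRank[r] = new ByteArrayOutputStream();
    }

    // A byte-level send: the same algorithm works whether the stream is
    // backed by a plain Java socket or a GridTcp socket.
    void send(int destRank, byte[] message) {
        outputStreamForRank[destRank].write(message, 0, message.length);
    }

    public static void main(String[] args) {
        RankStreams comm = new RankStreams(4);
        comm.send(2, new byte[]{1, 2, 3});
        System.out.println(comm.outputStreamForRank[2].size()); // 3 bytes queued for rank 2
    }
}
```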

Implementation Notes - Performance Creating Java byte arrays/buffers is very expensive and greatly reduces performance. One solution: use a permanent buffer for serialization (e.g., a 64 KB byte buffer), serializing into the buffer until it is full, writing the buffer out, and then serializing the remaining data. This is not effective with collective-communication algorithms, which either require extra byte storage to handle/save serialized data or require serialization/deserialization at every read/write.
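The permanent-buffer strategy can be sketched with java.nio.ByteBuffer. The class and method names here are illustrative, not part of MPJ, and an in-memory stream stands in for the socket:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class BufferedSerializer {
    private final ByteBuffer buf = ByteBuffer.allocate(64 * 1024);       // permanent 64 KB buffer
    private final ByteArrayOutputStream wire = new ByteArrayOutputStream(); // stands in for the socket

    // Serialize doubles into the fixed buffer; whenever it fills, write it
    // out and keep going -- no per-message byte-array allocation.
    void sendDoubles(double[] data) {
        for (double d : data) {
            if (buf.remaining() < Double.BYTES) flush();
            buf.putDouble(d);
        }
        flush(); // write out whatever remains
    }

    private void flush() {
        wire.write(buf.array(), 0, buf.position());
        buf.clear();
    }

    byte[] bytesWritten() { return wire.toByteArray(); }

    public static void main(String[] args) {
        BufferedSerializer s = new BufferedSerializer();
        s.sendDoubles(new double[10000]);            // more than one 64 KB buffer's worth
        System.out.println(s.bytesWritten().length); // 80000 bytes total
    }
}
```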

Raw Bandwidth – no serialization (just bytes). (Figure: bandwidth of Java Socket, GridTCP, mpiJava-A, and mpiJava.)

Serialization – Doubles and other primitives Doubles reach only about 20% of the raw-byte performance; other primitives see 25-80%. The need to "serialize", or turn items into bytes, is very costly. In C/C++ it is a cast into a byte pointer: 1 instruction. In Java:

  int x;                      // for just 1 integer
  byte[] arr = new byte[4];   // extra memory cost
  arr[3] = (byte)  x;
  arr[2] = (byte) (x >>> 8);  // shift, cast, copy
  arr[1] = (byte) (x >>> 16); // repeat
  arr[0] = (byte) (x >>> 24);

Lots of instructions, plus extra memory for the byte buffer, and the cost doubles due to deserialization on the other side.
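The deserialization on the receiving side, which doubles the cost, mirrors those shifts in reverse. A self-contained round trip (the helper names are illustrative, not mpiJava API):

```java
public class IntSerialization {
    // Serialize one int into 4 big-endian bytes, as on the slide.
    static void writeInt(byte[] arr, int off, int x) {
        arr[off]     = (byte) (x >>> 24);
        arr[off + 1] = (byte) (x >>> 16);
        arr[off + 2] = (byte) (x >>> 8);
        arr[off + 3] = (byte)  x;
    }

    // Deserialization on the receiving side: mask each byte back in,
    // paying the shift/cast/copy cost a second time.
    static int readInt(byte[] arr, int off) {
        return ((arr[off]     & 0xff) << 24)
             | ((arr[off + 1] & 0xff) << 16)
             | ((arr[off + 2] & 0xff) << 8)
             |  (arr[off + 3] & 0xff);
    }

    public static void main(String[] args) {
        byte[] buf = new byte[4];
        writeInt(buf, 0, -123456789);
        System.out.println(readInt(buf, 0) == -123456789); // prints "true": lossless round trip
    }
}
```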

PingPong (send and recv) – Doubles

PingPong - Objects

Bcast – 8 processes Doubles

Bcast – 8 processes Objects

Performance Analysis
- Raw bandwidth: mpiJavaS comes to about % of maximum Java performance; mpiJavaA (with checkpointing and error recovery) incurs 20-60% overhead, but still overtakes mpiJava with bigger data segments.
- Doubles & objects: primitives and objects that need serialization incur a 25-50% overhead.
- mpiJavaA has memory issues: it can run out of memory.

Conclusion The next step is to develop a tool that automatically parses a user program into GridTcp functions for best performance, and ultimately to automate user job distribution, management, and error recovery.

A few helpful classes… CSS432 Networking CSS430 Operating Systems CSS360 Software Engineering CSS422 Hardware CSS343 Data Structures & Algorithms

Questions?