Pluggable Architecture for Java HPC Messaging


Pluggable Architecture for Java HPC Messaging
Mark Baker, Bryan Carpenter*, Aamir Shafi
Distributed Systems Group, University of Portsmouth
http://dsg.port.ac.uk
* OMII, Southampton

Presentation Outline
- Introduction,
- Design and Implementation of MPJ,
- Preliminary Performance Evaluation,
- Conclusion.

Introduction
MPI was introduced in June 1994 as a standard message-passing API for parallel scientific computing:
- Language bindings for C, C++, and Fortran,
- The 'Java Grande Message Passing Workgroup' defined Java bindings in 1998.
Previous efforts follow two approaches:
- Pure Java approach: Remote Method Invocation (RMI), sockets,
- JNI approach.
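For context, a minimal sketch of what a program written against the mpiJava 1.2 style of Java bindings looks like (the class name Hello is illustrative):

```java
import mpi.MPI;

// Minimal "hello world" in the mpiJava 1.2 style of API.
public class Hello {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);                          // start the messaging runtime
        int rank = MPI.COMM_WORLD.Rank();        // this process's id
        int size = MPI.COMM_WORLD.Size();        // total number of processes
        System.out.println("Hello from " + rank + " of " + size);
        MPI.Finalize();                          // shut the runtime down
    }
}
```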

Pure Java Approach
- RMI: meant for client-server applications, so communication performance is a concern.
- Java Sockets: the Java New I/O (NIO) package adds non-blocking I/O to the Java language; direct buffers are allocated in native OS memory, and the JVM attempts to provide faster I/O through them.
- Communication performance: comparison of Java NIO and C NetPIPE (a ping-pong benchmark) drivers. Java performance is similar to C on Fast Ethernet (a naive comparison): latency ~125 microseconds, throughput ~90 Mbps.
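A minimal sketch of the java.nio features referred to above, a direct buffer and a non-blocking SocketChannel; the host name and port are placeholders:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Sketch of the NIO features mentioned above: a direct buffer
// (allocated outside the Java heap) and a non-blocking channel write.
public class NioSketch {
    public static void main(String[] args) throws Exception {
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024); // native-memory buffer
        buf.put("ping".getBytes()).flip();

        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);                           // non-blocking mode
        ch.connect(new InetSocketAddress("peer-host", 9000));  // placeholder endpoint
        while (!ch.finishConnect()) { /* poll, or use a Selector in real code */ }

        while (buf.hasRemaining()) {
            ch.write(buf);   // may write fewer bytes than requested; loop until drained
        }
        ch.close();
    }
}
```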

JNI Approach
- The importance of JNI cannot be ignored: where pure Java falls short, JNI makes it work.
- Advances in HPC communication hardware have continued: network latency has been reduced to a couple of microseconds.
- 'Pure Java' is not a universal solution: with Myrinet available, no application user would opt for Fast Ethernet.
- Cons: not in the spirit of the Java philosophy 'write once, run anywhere'.
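To illustrate the JNI approach in general terms, a hypothetical sketch of a device exposing a native send call; the class, method, and library names are illustrative, not MPJ's actual interface:

```java
// Illustrative only: how a JNI-backed device could declare a native send.
// The method and library names here are hypothetical, not MPJ's real interface.
public class NativeSendSketch {
    static {
        System.loadLibrary("mydev");      // loads libmydev.so / mydev.dll (hypothetical)
    }

    // Implemented in C against a vendor communication library (e.g. GM for Myrinet).
    private static native void nativeSend(byte[] data, int destRank, int tag);

    public static void main(String[] args) {
        nativeSend("ping".getBytes(), 1, 0);  // throws UnsatisfiedLinkError without the native library
    }
}
```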

The Problem
For Java messaging there is no 'one size fits all' approach:
- Portability and high performance are often contradictory requirements: portability favours pure Java, high performance favours JNI.
- The choice between portability and high performance is best left to application users.
- The challenging issue is how to manage these contradictory requirements: how to provide a flexible mechanism that lets applications swap communication protocols?


Design
Aims: support swapping between various communication devices.
Two device levels (see the device-interface sketch below):
- The MPJ device level (mpjdev): separates the native MPI-2 device from all other devices. The 'native MPI-2' device is a special case: it can cut through and make use of a native implementation's advanced MPI features.
- The xdev device level (xdev):
  - 'niodev': xdev based on the Java NIO API,
  - 'gmdev': xdev based on the GM 2.x communication library,
  - 'smpdev': xdev based on the Threads API.
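A hypothetical sketch of a pluggable device abstraction of this kind; the interface, class, and device names below are illustrative rather than the actual mpjdev/xdev API:

```java
import java.nio.ByteBuffer;

// Illustrative sketch of a pluggable device layer; names are hypothetical,
// not the real xdev API. Each transport (NIO, GM/Myrinet, shared memory)
// provides its own implementation, selected at start-up.
interface MessagingDevice {
    void init(String[] args) throws Exception;
    void send(ByteBuffer buf, int destRank, int tag) throws Exception;
    void recv(ByteBuffer buf, int srcRank, int tag) throws Exception;
    void finish() throws Exception;
}

final class DeviceFactory {
    // Pick a device implementation by name, e.g. from a runtime flag.
    static MessagingDevice create(String name) throws Exception {
        switch (name) {
            case "niodev": return (MessagingDevice)
                Class.forName("devices.NioDevice").getDeclaredConstructor().newInstance();
            case "smpdev": return (MessagingDevice)
                Class.forName("devices.SmpDevice").getDeclaredConstructor().newInstance();
            default: throw new IllegalArgumentException("unknown device: " + name);
        }
    }
}
```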

Design (architecture figure not reproduced in the transcript)

Implementation
MPJ complies with the functionality of MPI-1.2:
- Point-to-point communications,
- Collective communications,
- Groups, communicators, and contexts,
- Derived datatypes (with a buffering API).
Runtime infrastructure:
- Allows bootstrapping of MPJ processes,
- The MPJ daemon can be installed as a service.
Communication protocols (point-to-point usage sketched below):
- Java NIO device,
- GM 2.x.x device (Myrinet),
- Shared memory device (using the Threads API),
- Native MPI-2 device.
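A short sketch of point-to-point usage in the mpiJava-style API that MPJ follows (the tag and message contents are arbitrary):

```java
import mpi.MPI;

// Two-process point-to-point exchange in the mpiJava 1.2 API style.
public class PointToPoint {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int[] msg = new int[1];

        if (rank == 0) {
            msg[0] = 42;
            MPI.COMM_WORLD.Send(msg, 0, 1, MPI.INT, 1, 99);   // buf, offset, count, type, dest, tag
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(msg, 0, 1, MPI.INT, 0, 99);   // buf, offset, count, type, src, tag
            System.out.println("rank 1 received " + msg[0]);
        }
        MPI.Finalize();
    }
}
```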


Preliminary Performance Evaluation
Point-to-point benchmarks on nodes connected by Fast Ethernet (a ping-pong sketch follows):
- MPJ (using Java NIO),
- mpiJava 1.2.5 (using MPICH 1.2.5),
- MPICH 1.2.5 (using ch_p4),
- LAM/MPI 7.0.9 (using TCP RPI).
Transfer time and throughput graphs, followed by analysis.
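For reference, the general shape of a ping-pong timing loop used by such benchmarks, sketched in the mpiJava-style API; the message size and repetition count are arbitrary, and this is not the code that produced the results below:

```java
import mpi.MPI;

// NetPIPE-style ping-pong timing loop; parameters are arbitrary and this is
// not the benchmark code behind the numbers on the following slides.
public class PingPong {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = 1024;                       // message size in bytes
        int reps = 1000;
        byte[] buf = new byte[size];

        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI.COMM_WORLD.Send(buf, 0, size, MPI.BYTE, 1, 0);
                MPI.COMM_WORLD.Recv(buf, 0, size, MPI.BYTE, 1, 0);
            } else if (rank == 1) {
                MPI.COMM_WORLD.Recv(buf, 0, size, MPI.BYTE, 0, 0);
                MPI.COMM_WORLD.Send(buf, 0, size, MPI.BYTE, 0, 0);
            }
        }
        long elapsed = System.nanoTime() - start;
        if (rank == 0) {
            double rttMicros = elapsed / 1e3 / reps;   // average round-trip time per iteration
            System.out.println("avg round trip: " + rttMicros + " us");
        }
        MPI.Finalize();
    }
}
```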

Transfer Time Comparison
- MPJ: ~250 microseconds latency,
- mpiJava, LAM/MPI, MPICH: ~125 microseconds latency.

Throughput Comparison
- MPJ: ~81 Mbps,
- mpiJava: ~84 Mbps,
- MPICH: ~88 Mbps,
- LAM/MPI: ~90 Mbps.

Analysis
General behaviour is similar to other MPI implementations. Optimisation areas:
- Latency for small messages: currently the control message and the data are written in two separate SocketChannel write operations (a gathering-write sketch follows),
- Large messages: maintain a pool of buffers,
- Understand the anomaly at the 16M data point.
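One standard NIO remedy for the two-write issue is a single gathering write of header and payload; a sketch of the general technique, not necessarily the optimisation MPJ adopted:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// General NIO technique: combine the control (header) message and the data
// payload into one gathering write instead of two separate write() calls.
public class GatheringWriteSketch {
    static void sendMessage(SocketChannel ch, ByteBuffer header, ByteBuffer payload)
            throws IOException {
        ByteBuffer[] parts = { header, payload };
        long remaining = header.remaining() + payload.remaining();
        long written = 0;
        while (written < remaining) {
            written += ch.write(parts);   // gathering write: both buffers in one call
        }
    }
}
```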


Summary
- The key issue for Java messaging is not the debate between the pure Java and JNI approaches, but providing a flexible mechanism to swap between communication protocols.
- MPJ has a pluggable architecture: we are implementing 'niodev', 'gmdev', 'smpdev', and a native MPI-2 device.
- The MPJ runtime infrastructure allows bootstrapping of MPJ processes across various platforms.

Conclusions
- MPJ is the second-generation 'MPI for Java'.
- Current status: unit testing and optimisation.
- The initial version of MPJ follows the same API as mpiJava (and is intended to supersede it): parallel applications built on top of mpiJava will work with MPJ. There are some minor omissions, such as Bsend and explicit packing/unpacking -- see the release documentation for more details.
- Arguably the first "full" MPI library for Java that provides a pure Java implementation.

Questions