Pluggable Architecture for Java HPC Messaging

1 Pluggable Architecture for Java HPC Messaging
Mark Baker, Bryan Carpenter*, Aamir Shafi
Distributed Systems Group, University of Portsmouth
* OMII, Southampton

2 Presentation Outline
- Introduction,
- Design and Implementation of MPJ,
- Preliminary Performance Evaluation,
- Conclusion.

3 Introduction
- MPI was introduced in June 1994 as a standard message-passing API for parallel scientific computing:
  - Language bindings for C, C++, and Fortran.
- The 'Java Grande Message Passing Workgroup' defined Java bindings in 1998.
- Previous efforts follow two approaches:
  - Pure Java approach: Remote Method Invocation (RMI), sockets,
  - JNI approach.

4 Pure Java Approach
- RMI:
  - Meant for client-server applications.
- Java sockets:
  - Java New I/O (NIO) package:
    - Adds non-blocking I/O to the Java language.
    - Direct buffers: allocated in native OS memory, so the JVM can attempt faster I/O (see the NIO sketch after this list).
- Communication performance:
  - Comparison of Java NIO and C NetPIPE (a ping-pong benchmark) drivers.
  - Java performance similar to C on Fast Ethernet (a naïve comparison):
    - Latency: ~125 microseconds,
    - Throughput: ~90 Mbps.
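To make the NIO points above concrete, here is a minimal sketch of a direct buffer combined with a non-blocking SocketChannel. The host and port are placeholders; this is illustrative, not MPJ's actual driver code.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NioSketch {
    public static void main(String[] args) throws Exception {
        // Direct buffer: allocated in native OS memory, so the JVM can
        // pass it to native I/O calls without an intermediate copy.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);

        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);                           // non-blocking I/O
        ch.connect(new InetSocketAddress("localhost", 9000));  // placeholder endpoint
        while (!ch.finishConnect()) {
            // In a real driver this channel would be registered with a Selector.
        }

        buf.put("ping".getBytes());
        buf.flip();
        while (buf.hasRemaining()) {
            ch.write(buf);  // a non-blocking write may transfer 0 bytes; loop until done
        }
        ch.close();
    }
}
```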

5 JNI Approach
- The importance of JNI cannot be ignored:
  - Where pure Java fails, JNI makes it work (a minimal sketch follows this list).
- Advances in HPC communication hardware have continued:
  - Network latency has been reduced to a couple of microseconds.
- 'Pure Java' is not a universal solution:
  - Given Myrinet, no application user would opt for Fast Ethernet.
- Cons:
  - Not in the spirit of the Java philosophy 'write once, run anywhere'.
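As a rough illustration of the JNI route, a Java wrapper can declare native methods whose C implementations call into a native comms library. All names below (library and method) are hypothetical:

```java
// Hypothetical JNI wrapper; the C side would delegate to a native
// messaging library (e.g. a native MPI). Names are illustrative only.
public class NativeSend {
    static {
        System.loadLibrary("nativempi"); // loads libnativempi.so / nativempi.dll
    }

    // Implemented in C and bridged via JNI; fast, but no longer
    // 'write once, run anywhere'.
    public static native int send(byte[] buf, int count, int dest, int tag);
}
```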

6 The Problem
- For Java messaging, there is no 'one size fits all' approach.
- Portability and high performance are often contradictory requirements:
  - Portability: pure Java,
  - High performance: JNI.
- The choice between portability and high performance is best left to application users.
- The challenging issue is how to manage these contradictory requirements:
  - How can we provide a flexible mechanism that helps applications swap communication protocols?

7 Presentation Outline
- Introduction
- Design and Implementation
- Preliminary Performance Evaluation
- Conclusion

8 Design
- Aim: support swapping various communication devices (a sketch of a pluggable device interface follows this list).
- Two device levels:
  - The MPJ device level (mpjdev):
    - Separates the native MPI-2 device from all other devices.
    - The 'native MPI-2' device is a special case: it is possible to cut through and make use of a native implementation of advanced MPI features.
  - The xdev device level (xdev):
    - 'gmdev' – xdev based on the GM 2.x comms library,
    - 'niodev' – xdev based on the Java NIO API,
    - 'smpdev' – xdev based on the Threads API.
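A minimal sketch of what a pluggable device abstraction along these lines could look like; the interface name and method signatures are assumptions for illustration, not MPJ's actual xdev API:

```java
import java.nio.ByteBuffer;

// Each communication protocol sits behind one common interface,
// so the layers above can swap devices without code changes.
public interface Device {
    void init(String[] args) throws Exception;                  // bootstrap the device
    void send(ByteBuffer buf, int dest, int tag) throws Exception;
    void recv(ByteBuffer buf, int src, int tag) throws Exception;
    void finish() throws Exception;                             // release resources
}

// Hypothetical implementations, one per protocol:
//   class NioDevice implements Device { ... }  // 'niodev' (Java NIO)
//   class GmDevice  implements Device { ... }  // 'gmdev'  (GM 2.x / Myrinet)
//   class SmpDevice implements Device { ... }  // 'smpdev' (Threads API)
```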

9 Design

10 Implementation
- MPJ complies with the functionality of MPI-1.2 (a minimal usage sketch follows this list):
  - Point-to-point communications,
  - Collective communications,
  - Groups, communicators, and contexts,
  - Derived datatypes: buffering API.
- Runtime infrastructure:
  - Allows bootstrapping of MPJ processes,
  - The MPJ daemon can be installed as a service.
- Communication protocols:
  - Java NIO device,
  - GM 2.x.x device (Myrinet),
  - Shared memory device (using the Threads API),
  - Native MPI-2 device.
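Since MPJ follows the mpiJava API (see the Conclusions slide), a minimal point-to-point program would look roughly as follows; exact class and method names may vary by release:

```java
import mpi.MPI;

public class HelloMPJ {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();

        int[] msg = new int[1];
        if (rank == 0) {
            msg[0] = 42;
            MPI.COMM_WORLD.Send(msg, 0, 1, MPI.INT, 1, 99);  // to rank 1, tag 99
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(msg, 0, 1, MPI.INT, 0, 99);  // from rank 0
            System.out.println("Rank 1 received " + msg[0]);
        }
        MPI.Finalize();
    }
}
```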

11 Presentation Outline
- Introduction
- Design and Implementation
- Preliminary Performance Evaluation
- Conclusion

12 Preliminary Performance Evaluation
- Point-to-point benchmarks (nodes connected by Fast Ethernet; a ping-pong sketch follows this list):
  - mpiJava (using MPICH 1.2.5),
  - MPJ (using Java NIO),
  - MPICH (1.2.5) (using ch_p4),
  - LAM/MPI (7.0.9) (using the TCP RPI),
- Transfer time and throughput graphs,
- Analysis.
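A rough sketch of the ping-pong pattern behind such transfer-time and throughput measurements, written against the mpiJava-style API (illustrative; not the exact benchmark code used):

```java
import mpi.MPI;

public class PingPong {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        byte[] buf = new byte[1024];   // message size: varied per data point
        int reps = 1000;

        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {           // ping
                MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.BYTE, 1, 0);
                MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.BYTE, 1, 0);
            } else if (rank == 1) {    // pong
                MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.BYTE, 0, 0);
                MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.BYTE, 0, 0);
            }
        }
        long elapsed = System.nanoTime() - start;

        if (rank == 0) {
            // Half the round-trip time approximates one-way transfer time.
            System.out.printf("avg one-way time: %.2f us%n",
                    elapsed / (2.0 * reps) / 1e3);
        }
        MPI.Finalize();
    }
}
```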

13 Transfer Time Comparison
- MPJ: ~250 microseconds (latency),
- mpiJava, LAM/MPI, MPICH: ~125 microseconds (latency).

14 Throughput Comparison
- MPJ: ~81 Mbps,
- mpiJava: ~84 Mbps,
- LAM/MPI: ~90 Mbps,
- MPICH: ~88 Mbps.

15 Analysis
- General behaviour is similar to other MPI implementations.
- Optimisation areas:
  - Latency for small messages: currently the control message and the data are written in two separate SocketChannel write operations (see the gathering-write sketch after this list),
  - Large messages: maintaining a pool of buffers,
  - Understanding the anomaly at the 16M data point.
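One plausible fix for the small-message latency, implied by the point above: combine the control header and the payload in a single gathering write, so they leave in one SocketChannel operation instead of two. The header layout below is invented for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class GatheringSend {
    // Assumes a blocking channel; a non-blocking one would retry via a Selector.
    static void send(SocketChannel ch, ByteBuffer payload) throws Exception {
        ByteBuffer header = ByteBuffer.allocate(8);
        header.putInt(payload.remaining());  // hypothetical field: message length
        header.putInt(99);                   // hypothetical field: tag
        header.flip();

        ByteBuffer[] msg = { header, payload };
        long total = header.remaining() + payload.remaining();
        long written = 0;
        while (written < total) {
            written += ch.write(msg);        // gathering write: one call, both buffers
        }
    }
}
```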

16 Presentation Outline
- Introduction
- Design and Implementation
- Preliminary Performance Evaluation
- Conclusion

17 Summary
- The key issue for Java messaging is not the pure Java versus JNI debate:
  - It is providing a flexible mechanism to swap various communication protocols.
- MPJ has a pluggable architecture:
  - We are implementing 'niodev', 'gmdev', 'smpdev', and the native MPI-2 device.
- The MPJ runtime infrastructure allows bootstrapping of MPI processes across various platforms.

18 Conclusions
- MPJ is the second-generation 'MPI for Java'.
- Current status: unit testing, optimisation.
- The initial version of MPJ follows the same API as mpiJava (and is intended to supersede mpiJava):
  - Parallel applications built on top of mpiJava will work with MPJ.
  - There are some minor omissions: Bsend and explicit packing/unpacking -- see the release docs for more details.
- Arguably the first 'full' MPI library for Java that provides a pure Java implementation.

19 Questions

