Parallel Programming Dr Andy Evans

Parallel programming Various options exist, but a popular one is the Message Passing Interface (MPI). This is a standard for talking between nodes, implemented in a variety of languages. With shared-memory systems we could just communicate by writing to shared memory, but building events around continually checking memory isn't very efficient; message passing is better. The Java API description was formulated by the Java Grande Forum. A good implementation is MPJ Express: a language implementation plus a runtime/manager.

Other implementations mpiJava: P2P-MPI: (well set up for peer-to-peer development). Some, like mpiJava, require an underlying C implementation to wrap around, such as LAM:

MPJ Express Allows you to use its MPI library to run MPI code, and sorts out the communication as well. It runs in Multicore Configuration, i.e. on one PC: each process runs as a thread, distributed around the available cores. Great for developing/testing. It also runs in Cluster Configuration, i.e. on multiple PCs.
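As a rough guide (assuming MPJ Express is installed and its scripts are on the path), code compiled against its mpj.jar is launched with something like mpjrun.sh -np 4 MyProgram (mpjrun.bat on Windows), where -np gives the number of processes to start; MyProgram is an illustrative class name.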

How to check processor/core numbers
My Computer → Properties
Right-click taskbar → Start Task Manager (→ Resource Monitor in Windows 8)
With Java: Runtime.getRuntime().availableProcessors();
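A quick way to print that from Java (a minimal sketch; the class name is illustrative):

public class CoreCount {
    public static void main(String[] args) {
        // Number of cores/hardware threads visible to the JVM.
        System.out.println(Runtime.getRuntime().availableProcessors());
    }
}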

General outline You write the same code for all nodes; however, the behaviour changes depending on the node number. You can also open sockets to other nodes and send them stuff if they are listening:

if (node == 0) {
    listen();
} else {
    sendData();
}

Usually the MPI environment will organise running the code on the other nodes, if you tell it to run the code and how many nodes you want.

MPI basics API definition for communicating between nodes.
MPI.Init(args) Calls the initiation code, with a String[] of command-line arguments.
MPI.Finalize() Shuts the environment down.
MPI.COMM_WORLD.Size() Gets the number of available nodes.
MPI.COMM_WORLD.Rank() Gets the node the code is running on.
Calls usually sit within a try-catch block ending:

} catch (MPIException mpiE) {
    mpiE.printStackTrace();
}
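Putting those four calls together (a minimal sketch; the class name HelloMPI is illustrative):

import mpi.MPI;
import mpi.MPIException;

public class HelloMPI {
    public static void main(String[] args) {
        try {
            MPI.Init(args);                            // start the MPI environment
            int node = MPI.COMM_WORLD.Rank();          // which node is this code on?
            int numberOfNodes = MPI.COMM_WORLD.Size(); // how many nodes in total?
            if (node == 0) {
                System.out.println("Coordinator of " + numberOfNodes + " nodes");
            } else {
                System.out.println("Worker node " + node);
            }
            MPI.Finalize();                            // shut the environment down
        } catch (MPIException mpiE) {
            mpiE.printStackTrace();
        }
    }
}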

Load balancing This kind of thing is common: node 0 acts as coordinator, the agents are divided evenly among the remaining nodes, and the last node takes any remainder:

int nodeNumberOfAgents = 0;
if (node != 0) {
    nodeNumberOfAgents = numberOfAgents / (numberOfNodes - 1);
    if (node == (numberOfNodes - 1)) {
        nodeNumberOfAgents = nodeNumberOfAgents + (numberOfAgents % (numberOfNodes - 1));
    }
    agents = new Agent[nodeNumberOfAgents];
    for (int i = 0; i < nodeNumberOfAgents; i++) {
        agents[i] = new Agent();
    }
}
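For example, with numberOfAgents = 10 and numberOfNodes = 4 (node 0 coordinating), each of the three worker nodes gets 10 / 3 = 3 agents, and the last worker also takes the remainder, 10 % 3 = 1, for 4 agents in total.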

Sending stuff
MPI.COMM_WORLD.Send(java.lang.Object, startIndex, lengthToSend, dataType, nodeToSendTo, messageIntId);
All sent objects must be 1D arrays, even if there is only one thing in them.
dataType:
Array of booleans: MPI.BOOLEAN
Array of doubles: MPI.DOUBLE
Array of ints: MPI.INT
Array of nulls: MPI.NULL
Array of objects: MPI.OBJECT
Objects must implement java.io.Serializable.
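For example, a worker node might send an array of results to node 0 like this (a sketch; the array size and the message id of 50 are illustrative):

double[] results = new double[100];
// ... fill results with this node's work ...
// Send all 100 doubles, starting at index 0, to node 0, tagged with message id 50.
MPI.COMM_WORLD.Send(results, 0, results.length, MPI.DOUBLE, 0, 50);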

Receiving stuff
MPI.COMM_WORLD.Recv(java.lang.Object, startIndex, lengthToGet, dataType, nodeSending, messageIntId);
The Object is a 1D array that the data gets put into. It might, for example, be filled inside a loop that increments nodeSending, to receive from all nodes in turn.
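Continuing the sketch from the sending slide, node 0 might collect the workers' arrays like this (assumes the same illustrative message id, 50):

double[] buffer = new double[100];
for (int nodeSending = 1; nodeSending < numberOfNodes; nodeSending++) {
    // Blocks until 100 doubles arrive from nodeSending with message id 50.
    MPI.COMM_WORLD.Recv(buffer, 0, buffer.length, MPI.DOUBLE, nodeSending, 50);
    // ... process buffer here, before the next receive overwrites it ...
}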

Other MPI commands Any implementation of the API should have the same methods etc. For MPJ Express, see: