Class CS 775/875, Spring 2011 Amit H. Kumar, OCCS Old Dominion University

Take Away
 Motivation
 Basic difference: MPI vs. RPC
 Parallel Computer Memory Architectures
 Parallel Programming Models
 MPI Definition
 MPI Examples

Motivation: Work Queues
 Work queues allow threads from one task to send processing work to another task in a decoupled fashion.
[Figure: producer (P) and consumer (C) threads linked by a shared queue]

Motivation …
 Make this work in a distributed setting …
[Figure: producers (P) and consumers (C) linked by a shared network queue]

MPI vs. RPC
 In simple terms, both are methods of inter-process communication (IPC).
 Both fall between the transport layer and the application layer in the OSI model.

Features           MPI                                        RPC
Portability        Massively parallel computing systems       Integral part of all OS*
                   & clusters of workstations
Process creation   Static & dynamic                           -
Topology support   Supported                                  Not supported
Load balancing     Excellent                                  Not supported

*Not completely sure about every OS in the market.

Parallel Computing one-liner
 Ultimately, parallel computing is an attempt to maximize the infinite but seemingly scarce commodity called time.

Parallel Computer Memory Architectures
 Shared Memory: shared-memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as a global address space.
 Uniform Memory Access (UMA). Ex.: an SMP.
 Non-Uniform Memory Access (NUMA). Ex.: two or more SMPs linked together.

Parallel Computer Memory Architectures…
 Distributed Memory: like shared-memory systems, distributed-memory systems vary widely but share a common characteristic: they require a communication network to connect inter-processor memory.
 Memory addresses in one processor do not map to another processor, so there is no concept of a global address space across all processors.

Parallel Computer Memory Architectures…
 Hybrid Distributed-Shared Memory: the largest and fastest computers in the world today employ both shared and distributed memory architectures.

Parallel Programming Models
Parallel programming models exist as an abstraction above hardware and memory architectures. Several are in common use:
 Shared Memory: tasks share a common address space, which they read and write asynchronously.
 Threads: a single process can have multiple, concurrent execution paths. Example implementations: POSIX threads & OpenMP (see the sketch after this list).
 Message Passing: tasks exchange data by sending and receiving messages. Ex.: the MPI & MPI-2 specifications.
 Data Parallel: tasks perform the same operation on their own partition of the work. Ex.: High Performance Fortran, which supports data-parallel constructs.
 Hybrid: a combination of the above models, e.g. message passing + threads.
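To make the threads model concrete, here is a minimal sketch using OpenMP, one of the example implementations named above (the program and its details are illustrative, not from the slides). A team of threads shares the loop's work within one address space, and the reduction clause merges the per-thread partial sums.

#include <stdio.h>
#include <omp.h>

int main(void) {
    const int N = 1000000;
    double sum = 0.0;

    /* The iterations are divided among a team of threads sharing one
       address space; reduction(+:sum) combines each thread's private
       partial sum into the shared result. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= N; i++)
        sum += 1.0 / i;

    printf("Harmonic sum of %d terms (up to %d threads): %f\n",
           N, omp_get_max_threads(), sum);
    return 0;
}

Compile with, e.g., gcc -fopenmp sum.c (the file name is arbitrary).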

Define MPI
 MPI = Message Passing Interface
 In short: MPI is a specification for the developers and users of message-passing libraries. By itself, it is NOT a library, but rather the specification of what such a library should be.
 A few implementations of MPI: MPICH, MPICH2, MVAPICH, MVAPICH2, …

MPI Examples… Getting Started
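The slide's code did not survive transcription, so here is a hedged sketch of the skeleton that every MPI program shares: include the MPI header, initialize the environment before any other MPI call, and finalize it before exiting.

#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);   /* must precede all other MPI calls */

    /* ... parallel work goes here ... */

    MPI_Finalize();           /* no MPI calls allowed after this */
    return 0;
}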

MPI uses objects called communicators and groups to define which collection of processes may communicate with each other.
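As a minimal sketch of this idea: MPI_COMM_WORLD is the predefined communicator containing every process, and new communicators can be derived from it. MPI_Comm_split below is an illustrative choice, not something shown on the slides; it partitions the world into sub-communicators by a "color" value.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Processes with the same color (even vs. odd rank) end up in
       the same derived communicator. */
    MPI_Comm half;
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half);

    int half_rank;
    MPI_Comm_rank(half, &half_rank);
    printf("World rank %d of %d has rank %d in its sub-communicator\n",
           world_rank, world_size, half_rank);

    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}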

Rank
 Within a communicator, every process has its own unique integer identifier, assigned by the system when the process initializes. A rank is sometimes also called a "task ID". Ranks are contiguous and begin at zero.
 Ranks are used by the programmer to specify the source and destination of messages, and are often used conditionally by the application to control program execution (if rank == 0 do this / if rank == 1 do that).
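A minimal sketch of that conditional pattern (illustrative, not taken from the slides): rank 0 sends an integer to rank 1, which receives and prints it. Run it with at least two processes, e.g. mpirun -np 2 ./a.out.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my rank in MPI_COMM_WORLD */

    if (rank == 0) {
        /* Ranks name the destination (1) of the message... */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ...and the source (0) it is expected from. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}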

First C Program
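The program itself did not survive transcription; the following is a standard "first MPI program" of the kind such a slide usually shows, reconstructed as a hedged sketch rather than the original code.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes are there? */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut MPI down cleanly */
    return 0;
}

Built and run with an MPI implementation such as MPICH2, e.g. mpicc hello.c -o hello && mpirun -np 4 ./hello (file name and process count are arbitrary).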

References
 References for the content in this presentation are available upon request.
 MPICH2 download: pich2/downloads/index.php?s=downloads
 MVAPICH2 download: state.edu/download/mvapich2/