1 Datamation Sort: 1 Million Record Sort using OpenMP and MPI
Sammie Carter, Department of Computer Science, N.C. State University
November 18, 2004

2 Background
• The Datamation sorting benchmark was introduced in 1985 by a group of database experts as a test of a processor's I/O subsystem and operating system.
• The performance metric is the time to sort 1 million 100-byte records, where the first 10 bytes of each record are the key.
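
In C terms, the benchmark's record layout might be declared like this (an illustrative sketch; the slides do not define a struct):

/* One Datamation record: 100 bytes, keyed on the first 10.
   (Illustrative; not from the original slides.) */
typedef struct {
    char key[10];      /* the 10-byte sort key */
    char payload[90];  /* the remaining 90 bytes */
} record_t;            /* sizeof(record_t) == 100; char arrays need no padding */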

3 Sorting Review
• Common sorting algorithms can be divided into two classes by algorithmic complexity. Algorithmic complexity is a complex subject (imagine that!) that would take too much time to explain here, but suffice it to say that there is a direct correlation between the complexity of an algorithm and its relative efficiency. Complexity is generally written in Big-O notation, where O represents the complexity of the algorithm and n represents the size of the set the algorithm is run against.
• The two classes of sorting algorithms are O(n²), which includes the bubble, insertion, selection, and shell sorts; and O(n log n), which includes the heap, merge, and quick sorts.
• O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ)

4 O(n²) Sorts
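
The transcript carries no code for this slide, so as a representative O(n²) example here is a minimal insertion sort over ints (an illustration, not taken from the talk):

#include <stddef.h>

/* Insertion sort: O(n²) comparisons and moves in the worst case */
void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int v = a[i];
        size_t j = i;
        /* Shift larger elements right until v's slot is found */
        while (j > 0 && a[j - 1] > v) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = v;
    }
}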

5 O(n log n) Sorts
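
Likewise, a minimal top-down merge sort over ints as a representative O(n log n) example (illustrative only; tmp is caller-provided scratch space of the same length as a):

#include <stddef.h>
#include <string.h>

/* Merge sort a[lo..hi) using tmp as scratch; O(n log n) comparisons */
static void msort(int *a, int *tmp, size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return;
    size_t mid = lo + (hi - lo) / 2;
    msort(a, tmp, lo, mid);   /* sort the left half */
    msort(a, tmp, mid, hi);   /* sort the right half */

    /* Merge the two sorted halves into tmp, then copy back */
    size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof a[0]);
}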

6 Sorting Example

7 OpenMP
• OpenMP is an industry-standard application programming interface (API) for writing parallel applications for shared-memory computers. At the heart of OpenMP are directives, or pragmas, that programmers insert to incrementally add parallelism to a program.

8 OpenMP – Hello World

#include <omp.h>
#include <stdio.h>

int main()
{
    int nthreads, tid;

    /* Fork a team of threads, giving each its own copies of the variables */
    #pragma omp parallel private(nthreads, tid)
    {
        /* Obtain thread number */
        tid = omp_get_thread_num();
        printf("Hello World from thread = %d\n", tid);

        /* Only the master thread does this */
        if (tid == 0) {
            nthreads = omp_get_num_threads();
            printf("Number of threads = %d\n", nthreads);
        }
    } /* All threads join the master thread and disband */

    return 0;
}
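
One common way to build and run this (assuming gcc; the compile line is not on the original slide):

/* gcc -fopenmp omp_hello.c -o omp_hello */
/* OMP_NUM_THREADS=4 ./omp_hello */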

9 MPI – Message Passing Interface
• MPI defines a standard library for message passing that can be used to develop portable message-passing programs in either C or Fortran.
• The MPI standard defines both the syntax and the semantics of a core set of library routines that are very useful in writing message-passing programs.
• MPI was developed by a group of researchers from academia and industry, and has enjoyed wide support from almost all hardware vendors. Vendor implementations of MPI are available on almost all commercial parallel computers.

10 MPI – Hello World

/* mpicc -o helloworld helloworld.c */
/* bsub -W 2 -I -n 4 mpiexec ./helloworld */

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int myrank, numprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    printf("Hello World from process %d of %d\n", myrank, numprocs);

    MPI_Finalize();
    return 0;
}
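
With 4 processes, each rank prints its greeting once, in a nondeterministic interleaving, for example:

Hello World from process 2 of 4
Hello World from process 0 of 4
Hello World from process 3 of 4
Hello World from process 1 of 4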

11 OpenMP and MPI Examples
• OpenMP Hello World: omp_hello.c (mcrae)
• OpenMP Sort: ompmerge3.c (mcrae)
• MPI Hello World: helloworld.c (henry2)
• MPI Sort: mpimerge5.c (henry2)
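
The sort sources listed above (ompmerge3.c and mpimerge5.c) are not reproduced in the transcript. As a rough illustration of the approach their names suggest, here is a minimal OpenMP merge-sort sketch over the benchmark's 100-byte records; the function names and chunking scheme are assumptions for illustration, not the talk's actual code. Each thread qsort()s one contiguous chunk, and the sorted runs are then merged pairwise:

#include <omp.h>
#include <stdlib.h>
#include <string.h>

#define RECLEN 100   /* benchmark record size */
#define KEYLEN 10    /* sort key: first 10 bytes */

/* Order records by their 10-byte keys */
static int keycmp(const void *a, const void *b)
{
    return memcmp(a, b, KEYLEN);
}

/* Merge two sorted runs of nlo and nhi records into dst */
static void merge_runs(const char *lo, size_t nlo,
                       const char *hi, size_t nhi, char *dst)
{
    while (nlo > 0 && nhi > 0) {
        if (keycmp(lo, hi) <= 0) { memcpy(dst, lo, RECLEN); lo += RECLEN; nlo--; }
        else                     { memcpy(dst, hi, RECLEN); hi += RECLEN; nhi--; }
        dst += RECLEN;
    }
    memcpy(dst, lo, nlo * RECLEN);
    memcpy(dst + nlo * RECLEN, hi, nhi * RECLEN);
}

/* Sort n records in recs, using tmp (same size) as scratch space */
void omp_record_sort(char *recs, char *tmp, size_t n)
{
    int nchunks = omp_get_max_threads();
    size_t per = (n + nchunks - 1) / nchunks;   /* records per chunk */

    /* Phase 1: each thread qsort()s one contiguous chunk */
    #pragma omp parallel for schedule(static)
    for (int c = 0; c < nchunks; c++) {
        size_t lo = (size_t)c * per;
        if (lo < n) {
            size_t cnt = (lo + per <= n) ? per : n - lo;
            qsort(recs + lo * RECLEN, cnt, RECLEN, keycmp);
        }
    }

    /* Phase 2: merge the sorted runs pairwise, doubling the run width
       each pass (shown sequentially here for brevity) */
    char *src = recs, *dst = tmp;
    for (size_t width = per; width < n; width *= 2) {
        for (size_t lo = 0; lo < n; lo += 2 * width) {
            size_t mid = (lo + width < n) ? lo + width : n;
            size_t end = (lo + 2 * width < n) ? lo + 2 * width : n;
            merge_runs(src + lo * RECLEN, mid - lo,
                       src + mid * RECLEN, end - mid,
                       dst + lo * RECLEN);
        }
        char *t = src; src = dst; dst = t;   /* swap source and scratch */
    }
    if (src != recs)
        memcpy(recs, src, n * RECLEN);
}

An MPI version would follow the same two-phase shape: each rank sorts its local chunk of records, then the sorted runs are combined through message passing (for example, gathered and merged at rank 0).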

12 Conclusion
• Final Sorting Algorithm Description
• Questions / Comments / Suggestions
• Contact Information: Sammie Carter

13 References
• Sorting Algorithms
• Sorting Algorithms Demo
• Parallel Programming with OpenMP
• Grid Computing (Barry Wilkinson)