2 Less fish … More fish! Parallelism means doing multiple things at the same time: you can get more work done in the same amount of time.


4 Suppose you want to do a jigsaw puzzle that has, say, a thousand pieces. We can imagine that it’ll take you a certain amount of time. Let’s say that you can put the puzzle together in an hour.

5 If Scott sits across the table from you, then he can work on his half of the puzzle and you can work on yours. Once in a while, you'll both reach into the pile of pieces at the same time (you'll contend for the same resource), which will cause a little bit of slowdown. And from time to time you'll have to work together (communicate) at the interface between his half and yours. The speedup will be nearly 2-to-1: y'all might take 35 minutes instead of the ideal 30.

6 Now let’s put Paul and Charlie on the other two sides of the table. Each of you can work on a part of the puzzle, but there’ll be a lot more contention for the shared resource (the pile of puzzle pieces) and a lot more communication at the interfaces. So y’all will get noticeably less than a 4-to-1 speedup, but you’ll still have an improvement, maybe something like 3-to-1: the four of you can get it done in 20 minutes instead of an hour.

7 If we now put Dave and Tom and Kate and Brandon on the corners of the table, there's going to be a whole lot of contention for the shared resource, and a lot of communication at the many interfaces. So the speedup y'all get will be much less than we'd like; you'll be lucky to get 5-to-1. So we can see that adding more and more workers onto a shared resource is eventually going to have diminishing returns.
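To put rough numbers on the metaphor, speedup is just the one-person time divided by the many-person time (the 8-person minute count below is my own illustrative guess based on the "lucky to get 5-to-1" remark; the other numbers come from the slides):

    2 people: 60 min / 35 min  ≈ 1.7x   (ideal would be 2x)
    4 people: 60 min / 20 min  = 3x     (ideal would be 4x)
    8 people: 60 min / ~12 min ≈ 5x     (ideal would be 8x)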

8 Now let’s try something a little different. Let’s set up two tables, and let’s put you at one of them and Scott at the other. Let’s put half of the puzzle pieces on your table and the other half of the pieces on Scott’s. Now y’all can work completely independently, without any contention for a shared resource. BUT, the cost per communication is MUCH higher (you have to scootch your tables together), and you need the ability to split up (decompose) the puzzle pieces reasonably evenly, which may be tricky to do for some puzzles.

9 It’s a lot easier to add more processors in distributed parallelism. But, you always have to be aware of the need to decompose the problem and to communicate among the processors. Also, as you add more processors, it may be harder to load balance the amount of work that each processor gets.

10 Load balancing means ensuring that everyone completes their workload at roughly the same time. For example, if the jigsaw puzzle is half grass and half sky, then you can do the grass and Scott can do the sky, and then y’all only have to communicate at the horizon – and the amount of work that each of you does on your own is roughly equal. So you’ll get pretty good speedup.

11 Load balancing can be easy, if the problem splits up into chunks of roughly equal size, with one chunk per processor. Or load balancing can be very hard.

• Parallel computation = set of tasks
• Task:
  ◦ Program
  ◦ Local memory
  ◦ Collection of I/O ports
• Tasks interact by sending messages through channels
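As a minimal sketch of the task/channel idea (my own illustration in C with MPI, not code from the slides), two processes act as tasks and one point-to-point message plays the role of the channel: rank 0 is the producing task and rank 1 the consuming task.

/* Task/channel sketch: two MPI processes as tasks, one message as the channel.
   Build with: mpicc channel.c -o channel ; run with: mpirun -np 2 ./channel */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {            /* producing task: writes to the channel */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {     /* consuming task: reads from the channel */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("task 1 received %d over its channel\n", value);
    }
    MPI_Finalize();
    return 0;
}

Each task has its own program and local memory (the MPI process), and the send/receive pair behaves like a channel from an output port of one task to an input port of the other.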

(Figure: a task/channel graph, with tasks as the nodes and channels as the connections between them.)

• Partitioning
  ◦ Dividing the problem into tasks
• Communication
  ◦ Determine what needs to be communicated between the tasks over channels
• Agglomeration
  ◦ Group or consolidate tasks to improve efficiency or simplify the programming solution
• Mapping
  ◦ Assign tasks to the computer processors

• Domain/data decomposition – data-centric approach
  ◦ Divide up the most frequently used data
  ◦ Associate the computations with the divided data
• Functional/task decomposition – computation-centric approach
  ◦ Divide up the computation
  ◦ Associate the data with the divided computations
• Primitive tasks: the resulting pieces from either decomposition
  ◦ The goal is to have as many of these as possible

Task Decomposition
• Decompose a problem by the functions it performs
• Gardening analogy:
  ◦ Need to mow and weed
  ◦ Two gardeners: one mows, one weeds
  ◦ Need to synchronize a bit so we don't weed the spot in the yard that is currently being mowed
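A small sketch of task decomposition (my own illustration in C with OpenMP, compiled with -fopenmp; not from the slides): the two "gardeners" are two parallel sections, one running a hypothetical mow() task and the other a hypothetical weed() task. A real version would also need the synchronization mentioned above so the weeder stays out of the mower's way.

/* Task decomposition sketch: two different activities run concurrently.
   mow() and weed() are placeholder stubs for illustration only. */
#include <stdio.h>

static void mow(void)  { printf("mowing the whole yard\n"); }
static void weed(void) { printf("weeding the whole yard\n"); }

int main(void) {
    #pragma omp parallel sections
    {
        #pragma omp section
        mow();              /* gardener 1: the mowing task  */
        #pragma omp section
        weed();             /* gardener 2: the weeding task */
    }
    return 0;
}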

Data Decomposition
• Decompose the problem by the data worked on
• Gardening analogy:
  ◦ Need to mow and weed
  ◦ Two gardeners: each mows and weeds ½ the yard
  ◦ Each gets its own part of the yard, so less synchronization between mowing/weeding
  ◦ However: gardeners can't be specialized, and there is contention for resources (the single mower)
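The matching sketch for data decomposition (again my own illustration in C with OpenMP): the yard is modeled as an array of patches, the loop iterations are split among the threads, and every thread both mows and weeds its own share, so no thread is specialized.

/* Data decomposition sketch: the patches of the yard are divided among
   the threads; each thread does all of the work on its own patches. */
#include <stdio.h>
#include <omp.h>

#define PATCHES 8

int main(void) {
    #pragma omp parallel for
    for (int p = 0; p < PATCHES; p++) {
        printf("thread %d mows and weeds patch %d\n",
               omp_get_thread_num(), p);
    }
    return 0;
}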

Scaling
• Task decomposition: adding 8 more gardeners is only beneficial if there are 8 more tasks (raking, blowing, etc.)
• Data decomposition: adding 8 more gardeners is only beneficial if:
  ◦ there are enough mowers for everyone
  ◦ the yard is big enough that the time it takes to get the mower out is worth it for the size mowed

Hybrid Approaches
• Can combine both approaches:
  ◦ Gardener 1 can mow/weed ½ the yard
  ◦ Gardener 2 can mow/weed ½ the yard
  ◦ Gardener 3 can rake/blow ½ the yard
  ◦ Gardener 4 can rake/blow ½ the yard

Checklist for a good partitioning:
• Lots of tasks
  ◦ e.g., at least 10x more primitive tasks than processors in the target computer
• Minimize redundant computations and data
• Load balancing
  ◦ Primitive tasks roughly the same size
• Scalable
  ◦ The number of tasks is an increasing function of problem size

• Local communication
  ◦ When a task needs data from a small number of other tasks
  ◦ A channel is created from the producing task to the consuming task
• Global communication
  ◦ When a task needs data from many or all other tasks
  ◦ Channels for this type of communication are not created during this step

Checklist for the communication structure:
• Balanced
  ◦ Communication operations are balanced among tasks
• Small degree
  ◦ Each task communicates with only a small group of neighbors
• Concurrency
  ◦ Tasks can perform their communications concurrently
  ◦ Tasks can perform their computations concurrently

• Increase locality
  ◦ Remove communication by agglomerating tasks that communicate with one another
  ◦ Combine groups of sending and receiving tasks
  ◦ Send fewer, larger messages rather than many short messages, each of which incurs message latency
• Maintain scalability of the parallel design
  ◦ Be careful not to agglomerate tasks so much that moving to a machine with more processors will not be possible
• Reduce software engineering costs
  ◦ Leveraging existing sequential code can reduce the expense of engineering a parallel algorithm
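To see why fewer, larger messages help, here is a rough cost model (my own hedged sketch: λ is the per-message latency used later in these slides, and β, an assumed parameter, is the per-word transfer time):

    n separate one-word messages:  n (λ + β)
    one n-word message:            λ + n β

The latency term λ is paid once instead of n times, which is the dominant saving whenever λ is much larger than β.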

• Eliminate communication between primitive tasks that are agglomerated into a consolidated task
• Combine groups of sending and receiving tasks

Agglomeration checklist:
• The locality of the parallel algorithm has increased
• The trade-off between agglomeration and code-modification costs is reasonable
• Agglomerated tasks have similar computational and communication costs
• The number of tasks increases with problem size
• The number of tasks is suitable for likely target systems

• Maximize processor utilization
  ◦ Ensure computation is evenly balanced across all processors
• Minimize interprocess communication

• Finding an optimal mapping is NP-hard
• We must rely on heuristics

Mapping checklist:
• Mappings based on one task per processor and on multiple tasks per processor have been considered
• Both static and dynamic allocation of tasks to processors have been evaluated
• If dynamic allocation of tasks to processors is chosen, the task allocator is not a bottleneck
• If static allocation of tasks to processors is chosen, the ratio of tasks to processors is at least 10 to 1

Case studies:
• Boundary value problem
• Finding the maximum
• The n-body problem
• Adding data input

(Figure: the boundary value problem: a rod surrounded by insulation, with its ends in ice water.)

• One data item per grid point
• Associate one primitive task with each grid point
• Two-dimensional domain decomposition

• Identify the communication pattern between primitive tasks:
  ◦ Each interior primitive task has three incoming and three outgoing channels

Agglomeration

• χ – time to update one element
• n – number of elements
• m – number of iterations
• Sequential execution time: m n χ
• p – number of processors
• λ – message latency
• Parallel execution time: m (χ ⌈n/p⌉ + 2λ)
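As a worked example with invented numbers (my own illustration, not from the slides), take χ = 1 µs, λ = 100 µs, n = 10,000 elements, m = 1,000 iterations, and p = 10 processors:

    Sequential: m n χ = 1,000 × 10,000 × 1 µs = 10 s
    Parallel:   m (χ ⌈n/p⌉ + 2λ) = 1,000 × (1,000 × 1 µs + 200 µs) = 1.2 s
    Speedup:    10 s / 1.2 s ≈ 8.3 on 10 processors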


• Given an associative operator ⊕: a0 ⊕ a1 ⊕ a2 ⊕ … ⊕ an-1
• Examples:
  ◦ Add
  ◦ Multiply
  ◦ And, Or
  ◦ Maximum, Minimum
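A minimal reduction sketch in C with MPI (my own illustration, not code from the slides), using addition as the associative operator: each process contributes one value and MPI_Reduce combines them onto rank 0.

/* Reduction sketch: each MPI process contributes one value; MPI_SUM plays
   the role of the associative operator, combining them on rank 0. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value, total;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    value = rank + 1;                      /* this task's contribution */
    MPI_Reduce(&value, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of all contributions = %d\n", total);
    MPI_Finalize();
    return 0;
}

Swapping MPI_SUM for MPI_MAX gives the "finding the maximum" case study; implementations typically use a binomial-tree communication pattern like the one in the figures that follow.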

(Figures: the reduction uses a binomial-tree communication pattern, which is a subgraph of a hypercube; the example figures step through a parallel sum.)

• Domain partitioning
• Assume one task per particle
• Each task has its particle's position and velocity vector
• Iteration:
  ◦ Get positions of all other particles
  ◦ Compute new position, velocity
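One common way to implement the "get positions of all other particles" step is an all-gather. Here is a hedged sketch in C with MPI (my own illustration with one 2-D particle per process and illustrative names, not code from the slides):

/* N-body communication sketch: each process owns one particle's 2-D
   position; MPI_Allgather gives every process a copy of all positions. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double my_pos[2] = { (double)rank, (double)rank };   /* placeholder position */
    double *all_pos = malloc(2 * nprocs * sizeof(double));

    /* after this call every task holds the positions of all particles */
    MPI_Allgather(my_pos, 2, MPI_DOUBLE, all_pos, 2, MPI_DOUBLE, MPI_COMM_WORLD);

    /* ... compute forces from all_pos, then update position and velocity ... */

    if (rank == 0)
        printf("rank 0 now holds %d positions\n", nprocs);
    free(all_pos);
    MPI_Finalize();
    return 0;
}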

(Figures: a hypercube and a complete graph as candidate communication structures.)

• Parallel computation
  ◦ Set of tasks
  ◦ Interactions through channels
• Good designs
  ◦ Maximize local computations
  ◦ Minimize communications
  ◦ Scale up

• Partition computation
• Agglomerate tasks
• Map tasks to processors
• Goals:
  ◦ Maximize processor utilization
  ◦ Minimize inter-processor communication

• Reduction
• Gather and scatter
• All-gather