
1
Parallel Programming Patterns
Ralph Johnson

2
Why patterns?
Patterns for Parallel Programming
The road ahead

3
Patterns (Software)
Things that repeat
Plans/schemas/motifs
“Best Practices”
Design vocabulary
Literature – pedagogical

4
Patterns
Are not magic
Can be misused
Not a replacement for experience

5
Composite
Idea: make an abstract "component" class.
Alternative 1: every component has a (possibly empty) set of components.
(class diagram: Component holds its children; Paragraph, Chapter, ... are subclasses)
Problem: many components have no children.

6
Composite Pattern
(class diagram: Component with Leaf and Composite subclasses; Composite is a container holding child Components)
Composite and Component have exactly the same interface, including an interface for enumerating children.
Component implements children() by returning the empty set.
Open question: does the interface for adding/removing children belong on Component, or only on Composite?
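The structure described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the talk; the class and method names (Component, Leaf, Composite, children, add, size) are my own, and size() stands in for any operation applied uniformly to the tree.

```python
class Component:
    """Abstract component: Leaf and Composite share this interface."""

    def children(self):
        # Default: a plain component has no children (the empty set).
        return []

    def size(self):
        # An example operation that clients apply uniformly to the tree.
        return 1 + sum(c.size() for c in self.children())


class Leaf(Component):
    pass


class Composite(Component):
    def __init__(self):
        self._children = []

    def add(self, component):
        # add/remove live only on Composite in this variant.
        self._children.append(component)
        return self

    def children(self):
        return list(self._children)


# Usage: clients treat leaves and composites identically.
doc = Composite()
doc.add(Leaf()).add(Leaf())
chapter = Composite()
chapter.add(Leaf())
doc.add(chapter)
# doc.size() == 5  (doc + 2 leaves + chapter + its leaf)
```

Putting add() only on Composite keeps Component's interface honest at the cost of clients sometimes needing to know which kind they hold; putting it on Component is the other answer to the open question above.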

7
Lessons learned
Patterns are a means to an end
Principles are more important than patterns
People like to copy code
Making mistakes is part of learning

8
Patterns for Parallel Programming
By Timothy Mattson, Beverly Sanders, and Berna Massingill
Technology independent – works for MPI, OpenMP, and Java threads
Domain independent
A pattern language

9
Categories of patterns
Finding concurrency
Algorithm structure
Supporting structures
Implementation mechanisms (not patterns)

10
Algorithm structure
Organize by tasks
– Linear: task parallelism – reduce dependencies
– Recursive: divide and conquer – manage granularity
Organize by data decomposition
– Linear: geometric decomposition – exchange
– Recursive: recursive data – more work, faster
Organize by flow of data
– Regular: pipeline
– Irregular: event-based coordination
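As an illustration of the "divide and conquer – manage granularity" entry above, here is a small fork/join-style sketch in Python. It is my own example, not from the book: the function name, the depth-based granularity control, and the use of a thread pool are all illustrative choices.

```python
# Hypothetical divide-and-conquer sum with explicit granularity control.
from concurrent.futures import ThreadPoolExecutor

def dc_sum(xs, pool, depth):
    # Manage granularity: once depth is exhausted, solve sequentially
    # rather than forking ever-smaller (and ever-cheaper) tasks.
    if depth == 0 or len(xs) <= 1:
        return sum(xs)
    mid = len(xs) // 2
    left = pool.submit(dc_sum, xs[:mid], pool, depth - 1)  # fork one half
    right = dc_sum(xs[mid:], pool, depth - 1)              # keep the other
    return left.result() + right                           # join

# The pool must have room for the 2**depth - 1 forked tasks, or blocked
# parents can starve their children of workers.
with ThreadPoolExecutor(max_workers=8) as pool:
    total = dc_sum(list(range(101)), pool, depth=2)
# total == 5050
```

The depth cutoff is the granularity knob the slide refers to: too deep and scheduling overhead dominates; too shallow and the work does not spread across workers.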

11
Supporting Structures
SPMD
Master/worker
Loop parallelism
Fork/join
Shared data
Shared queue
Distributed array
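Two of the structures listed above – master/worker and shared queue – combine naturally, and a minimal sketch fits in a few lines of standard-library Python. The task (squaring integers), the poison-pill shutdown, and all names are my own illustrative choices.

```python
# Master/worker over a shared queue: the master enqueues tasks, a fixed
# set of workers pulls from the queue and pushes results back.
import threading
import queue

def worker(tasks, results):
    while True:
        item = tasks.get()
        if item is None:              # poison pill: no more work
            break
        results.put(item * item)      # "process" the task

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for w in workers:
    w.start()

for i in range(10):                   # master hands out work
    tasks.put(i)
for _ in workers:                     # one pill per worker
    tasks.put(None)
for w in workers:
    w.join()

squares = sorted(results.get() for _ in range(10))
# squares == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The shared queue gives load balance for free: fast workers simply pull more tasks, which is why master/worker suits irregular task sizes.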

12
My critique
In addition to these high-level patterns:
– Need more technology-dependent patterns
– Need domain-dependent patterns
– Need smaller-scale patterns
High-level patterns are harder to learn:
– Give more examples
– Divide into smaller patterns (a pattern language)

13
Other patterns
Patterns at PLoP by Jorge Ortega-Arjona
– “Architectural” – similar in abstraction to PPP
– Communication primitives
Systems that generate software from patterns
– Steven Siu at Waterloo
– Macdonald and Szafron at U. of Alberta

14
Domain Dependent
Phil Colella’s 7 dwarfs
– Dense and sparse matrices
– Structured and unstructured meshes
– Particle systems
– FFT
– Monte Carlo methods
Berkeley’s 13 dwarfs/motifs
– Graph traversal, branch and bound, dynamic programming, combinatorial logic, FSMs, graphical models

15
Particle systems
Particle-particle
– Discrete forces
– Neighborhood of particles
– Task per interaction
Particle-mesh
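The particle-particle case with one task per pairwise interaction can be sketched as follows. This is my own toy example: the 1-D inverse-square "force law" is assumed purely for illustration, and the pair loop runs sequentially here – the point is that each (i, j) pair is an independent task with no coupling to any other pair.

```python
# One task per pairwise interaction; forces accumulated per particle.
from itertools import combinations

def pairwise_forces(positions):
    forces = [0.0] * len(positions)
    # Each (i, j) pair is an independent unit of work. In a parallel
    # version, only the accumulation into forces[] needs coordination.
    for i, j in combinations(range(len(positions)), 2):
        r = positions[j] - positions[i]
        f = (1.0 / (r * r)) * (1 if r > 0 else -1)  # toy force law
        forces[i] += f   # Newton's third law: equal and opposite
        forces[j] -= f
    return forces

f = pairwise_forces([0.0, 1.0, 3.0])
# The pairwise forces cancel, so the total force sums to zero.
```

The "neighborhood of particles" bullet is the usual refinement: restrict the pair loop to nearby particles so the all-pairs O(n²) cost drops for short-range forces.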

16
Exchange in MPI

Do i = 1, n_neighbors
   Call MPI_Send(edge, len, MPI_REAL, nbr(i), tag, comm, ierr)
Enddo
Do i = 1, n_neighbors
   Call MPI_Recv(edge, len, MPI_REAL, nbr(i), tag, comm, status, ierr)
Enddo

(Unsafe: if every process's MPI_Send blocks until a matching receive is posted, all processes can deadlock.)

17
Provide buffers, receive in any order

Do i = 1, n_neighbors
   Call MPI_Irecv(edge, len, MPI_REAL, nbr(i), tag, comm, request(i), ierr)
Enddo
Do i = 1, n_neighbors
   Call MPI_Send(edge, len, MPI_REAL, nbr(i), tag, comm, ierr)
Enddo
Call MPI_Waitall(n_neighbors, request, statuses, ierr)

18
Defer synchronization

Do i = 1, n_neighbors
   Call MPI_Irecv(edge, len, MPI_REAL, nbr(i), tag, comm, request(i), ierr)
Enddo
Do i = 1, n_neighbors
   Call MPI_Isend(edge, len, MPI_REAL, nbr(i), tag, comm, request(n_neighbors+i), ierr)
Enddo
Call MPI_Waitall(2*n_neighbors, request, statuses, ierr)

19
Parallel Programming Patterns
Many levels – all are needed
High-level patterns are hard to learn
– Give many examples
– Divide into smaller pieces
Low-level patterns might be easier to learn, but are no less important

20
Real patterns are discovered, not invented
The quality of a pattern is observed by using it
So, let’s
– discover them,
– write them,
– see what happens when people try to use them,
– and then fix them.
