Outline
– Why patterns?
– Patterns for Parallel Programming
– The road ahead
Patterns (Software)
– Things that repeat: plans, schemas, motifs
– “Best practices”
– A design vocabulary
– A literature – pedagogical
Patterns
– Are not magic
– Can be misused
– Are not a replacement for experience
Composite
– Idea: make an abstract "component" class
– Alternative 1: every component has a (possibly empty) set of components
– (class diagram: Component with children; subclasses such as Paragraph, Chapter, ...)
– Problem: many components have no children
Composite Pattern
– (class diagram: Component; subclasses Leaf and Composite, where Composite is a container of child Components)
– Composite and Component have exactly the same interface
– Component declares an interface for enumerating children
– Component implements children() by returning the empty set
– Should Component also declare an interface for adding/removing children?
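A minimal sketch of the alternative described above, in Java. The class and method names (Leaf, Composite, children(), count()) are illustrative, not from the slides; the point is that Leaf inherits the empty children() from Component, while only Composite stores an actual child list and exposes add().

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

abstract class Component {
    // Every Component can enumerate children; the default is the empty set,
    // so leaves need no child storage at all.
    public List<Component> children() {
        return Collections.emptyList();
    }
    // Example recursive operation over the tree (counts leaves).
    public abstract int count();
}

class Leaf extends Component {
    @Override public int count() { return 1; }
}

class Composite extends Component {
    private final List<Component> kids = new ArrayList<>();
    // add/remove live only on Composite, answering the question above
    // one way: keep mutation out of the shared Component interface.
    public void add(Component c) { kids.add(c); }
    @Override public List<Component> children() { return kids; }
    @Override public int count() {
        int n = 0;
        for (Component c : kids) n += c.count();
        return n;
    }
}
```

Clients that only traverse the tree can work entirely in terms of Component, which is the payoff of giving Composite and Component the same interface.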
Lessons learned
– Patterns are a means to an end
– Principles are more important than patterns
– People like to copy code
– Making mistakes is part of learning
Patterns for Parallel Programming
– By Timothy Mattson, Beverly Sanders, and Berna Massingill
– Technology independent – works for MPI, OpenMP, and Java threads
– Domain independent
– A pattern language
Algorithm structure
Organize by tasks
– Linear: task parallelism – reduce dependencies
– Recursive: divide and conquer – manage granularity
Organize by data decomposition
– Linear: geometric decomposition – exchange
– Recursive: recursive data – more work, but faster
Organize by flow of data
– Regular: pipeline
– Irregular: event-based coordination
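The "divide and conquer – manage granularity" entry above can be sketched with Java's fork/join framework. This is an illustrative example, not from the slides: a parallel array sum where a THRESHOLD constant controls granularity – ranges below it are summed sequentially rather than split into further tasks.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class ParallelSum extends RecursiveTask<Long> {
    static final int THRESHOLD = 1_000;   // granularity knob (illustrative value)
    final long[] a; final int lo, hi;
    ParallelSum(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= THRESHOLD) {       // small enough: solve directly
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;        // divide the range in half
        ParallelSum left = new ParallelSum(a, lo, mid);
        left.fork();                      // run left half as a separate task
        long right = new ParallelSum(a, mid, hi).compute();
        return left.join() + right;       // conquer: combine partial sums
    }

    static long sum(long[] a) {
        return ForkJoinPool.commonPool().invoke(new ParallelSum(a, 0, a.length));
    }
}
```

Raising THRESHOLD trades parallelism for lower task-creation overhead, which is exactly the granularity management the pattern calls for.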
My critique
In addition to these high-level patterns:
– Need more technology-dependent patterns
– Need domain-dependent patterns
– Need smaller-scale patterns
High-level patterns are harder to learn:
– More examples
– Divide into smaller patterns (a pattern language)
Other patterns
Patterns at PLoP by Jorge Ortega-Arjona
– “Architectural” – similar in abstraction to PPP
– Communication primitives
Systems that generate software from patterns
– Steven Siu at Waterloo
– Macdonald and Szafron at U. of Alberta
Domain Dependent
Phil Colella’s 7 dwarfs:
– Dense and sparse matrices
– Structured and unstructured meshes
– Particle systems
– FFT
– Monte Carlo methods
Berkeley’s 13 dwarfs/motifs:
– Graph traversal, branch and bound, dynamic programming, combinatorial logic, FSMs, graphical models
Particle systems
Particle-particle
– Discrete forces
– Neighborhood of particles
– Task per interaction
Particle-mesh
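A minimal, sequential sketch of the particle-particle idea above. The names and the 1-D setup are illustrative: each pair of particles within a cutoff radius (the "neighborhood") counts as one interaction, and in the pattern each such interaction is a candidate task for parallel execution.

```java
class ParticleParticle {
    // Count the interacting pairs within the cutoff radius.
    // In a real force computation, the body of the inner "if" would
    // evaluate a discrete force and each evaluation could be a task.
    static int interactions(double[] x, double cutoff) {
        int count = 0;
        for (int i = 0; i < x.length; i++)
            for (int j = i + 1; j < x.length; j++)      // visit each pair once
                if (Math.abs(x[i] - x[j]) <= cutoff)    // neighborhood test
                    count++;
        return count;
    }
}
```

With a small cutoff most pairs are skipped, which is why neighborhood structures (cell lists, trees) matter before a particle-mesh method is worth the extra machinery.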
Exchange in MPI

   do i = 1, n_neighbors
      call MPI_Send(edge, len, MPI_REAL, nbr(i), tag, comm, ierr)
   enddo
   do i = 1, n_neighbors
      call MPI_Recv(edge, len, MPI_REAL, nbr(i), tag, comm, status, ierr)
   enddo

! Caution: every rank sends before receiving, so this can deadlock
! if MPI cannot buffer the messages.
Provide buffers, receive in any order

   do i = 1, n_neighbors
      ! each Irecv needs its own buffer, e.g. a slice per neighbor,
      ! not the same edge array for every neighbor
      call MPI_Irecv(edge, len, MPI_REAL, nbr(i), tag, comm, request(i), ierr)
   enddo
   do i = 1, n_neighbors
      call MPI_Send(edge, len, MPI_REAL, nbr(i), tag, comm, ierr)
   enddo
   call MPI_Waitall(n_neighbors, request, statuses, ierr)
Parallel Programming Patterns
– Many levels – all are needed
– High-level patterns are hard to learn:
  – Give many examples
  – Divide into smaller pieces
– Low-level patterns might be easier to learn, but are no less important
Real patterns are discovered, not invented
– The quality of a pattern is observed by using it
– So, let’s:
  – discover them,
  – write them,
  – see what happens when people try to use them,
  – and then fix them.