Partitioning and Divide-and-Conquer Strategies ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, Jan 23, 2013.
4.1 Partitioning Partitioning simply divides the problem into parts; each part is then computed and the results are combined.
4.1 Divide and Conquer Characterized by dividing a problem into sub-problems of the same form as the larger problem. Further divisions into still smaller sub-problems are usually done by recursion. Recursive divide and conquer is amenable to parallelization because separate processes can be used for the divided parts. Also, data is usually naturally localized.
4.2 Partitioning/Divide and Conquer Examples Many possibilities. Operations on sequences of numbers, such as simply adding them together. Several sorting algorithms can often be partitioned or constructed in a recursive fashion. Numerical integration.
4.3 Partitioning a sequence of numbers into parts and adding the parts
4.4 Tree construction
4.5 Dividing a list into parts
4.6 Partial summation
Many sorting algorithms can be parallelized by partitioning and by divide and conquer. Example: bucket sort.
4.9 Bucket sort One "bucket" is assigned to hold the numbers that fall within each region. The numbers in each bucket are sorted using a sequential sorting algorithm. Sequential sorting time complexity: O(n log(n/m)). Works well if the original numbers are uniformly distributed across a known interval, say 0 to a - 1.
4.10 Parallel version of bucket sort Simple approach: assign one processor to each bucket.
4.11 Further Parallelization Partition the sequence into m regions, with each processor assigned one region. (Hence the number of processors, p, equals m.) Each processor maintains one "big" bucket for its region and m "small" buckets, one for each region. Each processor separates the numbers in its region into its own small buckets. All small buckets are emptied into the p big buckets. Each big bucket is then sorted by its processor.
4.12 Another parallel version of bucket sort
All-to-all broadcast. An MPI function that can be used to advantage here. Sends one data element from each process to every other process. Corresponds to multiple scatter operations, but implemented together. Perhaps it should be called all-to-all scatter?
4.13 Applying "all-to-all" broadcast to emptying small buckets into big buckets. Suppose 4 regions and 4 processors. (Figure: small buckets and big buckets.)
4.14 The "all-to-all" routine actually transfers rows of an array to columns, i.e., it transposes a matrix.
Numerical Integration Computing the "area under the curve." Parallelized by dividing the area into smaller areas (partitioning). Can also apply divide and conquer: repeatedly divide the areas recursively.
4.15 Numerical integration using rectangles. Each region is calculated using an approximation given by rectangles. (Figures: the rectangles, and aligning the rectangles with the curve.)
4.16 Numerical integration using trapezoidal method May not be better!
Applying Divide and Conquer
4.17 Adaptive Quadrature The solution adapts to the shape of the curve. Use three areas, A, B, and C. Computation is terminated when the largest of A and B is sufficiently close to the sum of the remaining two areas.
4.18 Adaptive quadrature with false termination. Some care might be needed in choosing when to terminate. The test might cause us to terminate early, if two large regions happen to be the same (i.e., C = 0).