James Edwards and Uzi Vishkin, University of Maryland

• Motivation
◦ Begin with a theory of parallel algorithms (PRAM)
◦ Develop an architecture (XMT) based on the theory
◦ Validate the theory using the architecture
◦ Validate the architecture using the theory
• To validate XMT, we need to move beyond simple benchmark kernels.
◦ This is in line with the history of performance benchmarking (e.g., SPEC).
• Triconnectivity is the most complex algorithm that has been tested on XMT.
◦ Only one serial implementation is publicly available, and there is no prior parallel implementation.
◦ Prior work of similar complexity on XMT includes biconnectivity [EV12-PMAM/PPoPP] and maximum flow [CV11-SPAA].

[Diagram: hierarchy of PRAM algorithmic building blocks, including advanced planarity testing, advanced triconnectivity, planarity testing, triconnectivity, st-numbering, k-edge/vertex connectivity, minimum spanning forest, Euler tours, ear decomposition search, biconnectivity, strong orientation, centroid decomposition, tree contraction, lowest common ancestors, graph connectivity, tree Euler tour, list ranking, 2-ruling set, prefix-sums, and deterministic coin tossing]

[Figure: an input graph G and the triconnected components of G]

• High-level structure
◦ Key insight behind both the serial and the parallel algorithms: separation pairs lie on cycles in the input graph (see the naive check sketched below).
◦ Serial [HT73]: use depth-first search.
◦ Parallel [RV88, MR92]: use an ear decomposition.
[Figure: an ear decomposition of G into ears E1, E2, E3]
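To make the notion of a separation pair concrete, here is a naive check (not part of either algorithm above): in a simple biconnected graph, {a, b} is a separation pair exactly when deleting both vertices disconnects the rest of the graph. This is a minimal sketch; the Graph struct and function name are illustrative assumptions, not part of the XMT library.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Adjacency-list graph: adj[v] lists the neighbors of v and deg[v] is its
 * length. The layout and names are illustrative. */
typedef struct {
    int n;        /* number of vertices    */
    int **adj;    /* adj[v][0 .. deg[v]-1] */
    int *deg;
} Graph;

/* Return true if {a, b} is a separation pair, i.e. deleting both vertices
 * leaves the remaining vertices disconnected. Runs a DFS on G - {a, b} from
 * any surviving vertex; testing all pairs this way costs O(n^2 (n + m)),
 * far slower than the algorithms on the slide, but it states the definition. */
bool is_separation_pair(const Graph *g, int a, int b) {
    int remaining = g->n - 2;
    if (remaining <= 1)
        return false;                      /* nothing left to disconnect */

    bool *seen  = calloc(g->n, sizeof(bool));
    int  *stack = malloc(g->n * sizeof(int));
    int   top = 0, reached = 0;

    int start = -1;                        /* any vertex other than a and b */
    for (int v = 0; v < g->n; v++)
        if (v != a && v != b) { start = v; break; }

    seen[start] = true;
    stack[top++] = start;
    while (top > 0) {
        int v = stack[--top];
        reached++;
        for (int i = 0; i < g->deg[v]; i++) {
            int w = g->adj[v][i];
            if (w == a || w == b || seen[w]) continue;
            seen[w] = true;
            stack[top++] = w;
        }
    }
    free(seen);
    free(stack);
    return reached < remaining;            /* some vertex became unreachable */
}
```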

• Low-level structure
◦ The bulk of the algorithm lies in general subroutines such as graph connectivity (see the sketch below).
◦ Implementation of the triconnectivity algorithm was greatly assisted by reuse of a library developed during earlier work on biconnectivity (PMAM ’12).
▪ Using this library, a majority of students successfully completed a programming assignment on biconnectivity in 2-3 weeks in a graduate course on parallel algorithms.
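To illustrate what one such general subroutine computes, here is a minimal, PRAM-flavored connected-components sketch in plain C. It uses label propagation over rounds, which is simpler (and slower in the worst case) than the Shiloach-Vishkin-style routine the XMT library presumably provides; the edge-list layout and names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Edge list: edge i joins u[i] and v[i]. On return, label[x] is the same for
 * every vertex in a connected component (it converges to the smallest vertex
 * index in the component). Each round reads only labels from the previous
 * round, so a round is conceptually one flat parallel step over the edges;
 * on a PRAM, concurrent updates to next[] would have to combine by minimum. */
void connected_components(int n, int m, const int *u, const int *v, int *label) {
    int *next = malloc(n * sizeof(int));
    for (int x = 0; x < n; x++) label[x] = x;

    bool changed = true;
    while (changed) {                        /* O(graph diameter) rounds */
        changed = false;
        for (int x = 0; x < n; x++) next[x] = label[x];
        for (int i = 0; i < m; i++) {        /* propagate the smaller label across each edge */
            if (label[u[i]] < next[v[i]]) { next[v[i]] = label[u[i]]; changed = true; }
            if (label[v[i]] < next[u[i]]) { next[u[i]] = label[v[i]]; changed = true; }
        }
        for (int x = 0; x < n; x++) label[x] = next[x];
    }
    free(next);
}
```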

• The Explicit Multi-Threading (XMT) architecture was developed at the University of Maryland with the following goals in mind:
◦ Good performance on parallel algorithms of any granularity
◦ Support for regular or irregular memory access
◦ Efficient execution of code derived from PRAM algorithms
• A 64-processor FPGA hardware prototype [WV08] and a software toolchain (compiler and simulator) [KTCBV11] exist; the latter is freely available for download.

• Random graph: edges are added at random between unique pairs of vertices.
• Planar3 graph: vertices are added in layers of three; each vertex in a layer is connected to the other two vertices in its layer and to two vertices of the preceding layer.
• Ladder: similar to Planar3, but with two vertices per layer (see the generator sketch below).
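As a concrete example of the simplest of these families, here is a small generator that prints the edge list of a ladder graph with L layers of two vertices each. The vertex numbering and output format are assumptions for illustration, not the authors' generator.

```c
#include <stdio.h>

/* Build a ladder graph with L layers of two vertices each and print its
 * edge list. Layer k holds vertices 2k and 2k+1; a rung edge joins them,
 * and two rail edges join each layer to the previous one. This yields
 * n = 2L vertices and m = 3L - 2 edges. */
void ladder(int L) {
    for (int k = 0; k < L; k++) {
        int a = 2 * k, b = 2 * k + 1;
        printf("%d %d\n", a, b);            /* rung within the layer */
        if (k > 0) {
            printf("%d %d\n", a - 2, a);    /* rails to the previous layer */
            printf("%d %d\n", b - 2, b);
        }
    }
}

int main(void) {
    ladder(5);   /* 10 vertices, 13 edges */
    return 0;
}
```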


T(n, m, s) = (2.38 n + … m + … s) log₂ n

• The speedups presented here (up to 129x), in conjunction with prior results for biconnectivity (up to 33x) and max-flow (up to 108x), demonstrate that the advantage of XMT is not limited to small kernels.
◦ Biconnectivity was an exceptional challenge due to the compactness of the serial algorithm.
• This work completes the capstone of the proof of concept of PRAM algorithms on XMT.
• With this work, we now have the foundation in place to advance to work on applications.

• [CV11-SPAA] G. Caragea and U. Vishkin. Better Speedups for Parallel Max-Flow. Brief announcement, SPAA 2011.
• [EV12-PMAM] J. Edwards and U. Vishkin. Better Speedups Using Simpler Parallel Programming for Graph Connectivity and Biconnectivity. PMAM 2012.
• [EV12-SPAA] J. Edwards and U. Vishkin. Brief Announcement: Speedups for Parallel Graph Triconnectivity. SPAA 2012.
• [HT73] J. E. Hopcroft and R. E. Tarjan. Dividing a graph into triconnected components. SIAM J. Computing, 2(3):135–158, 1973.

• [MR92] G. L. Miller and V. Ramachandran. A new graph triconnectivity algorithm and its parallelization. Combinatorica, 12(1):53–76, 1992.
• [KTCBV11] F. Keceli, A. Tzannes, G. Caragea, R. Barua, and U. Vishkin. Toolchain for programming, simulating and studying the XMT many-core architecture. Proc. 16th Int. Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS), in conjunction with IPDPS, Anchorage, Alaska, May 20, 2011.

• [RV88] V. Ramachandran and U. Vishkin. Efficient parallel triconnectivity in logarithmic time. In Proc. AWOC, pages 33–42, 1988.
• [TV85] R. E. Tarjan and U. Vishkin. An Efficient Parallel Biconnectivity Algorithm. SIAM J. Computing, 14(4):862–874, 1985.
• [WV08] X. Wen and U. Vishkin. FPGA-Based Prototype of a PRAM-on-Chip Processor. In Proceedings of the 5th Conference on Computing Frontiers, CF ’08, pages 55–66, New York, NY, USA, 2008. ACM.


• PRAM algorithms are not a good match for current hardware:
◦ Fine-grained parallelism = overheads
▪ Requires managing many threads
▪ Synchronization and communication are expensive
▪ Clustering reduces granularity, but at the cost of load balancing
◦ Irregular memory accesses = poor locality
▪ The cache is not used efficiently
▪ Performance becomes sensitive to memory latency

• Main feature of XMT: using hardware resources (e.g., silicon area, power consumption) similar to those of existing CPUs and GPUs, provide a platform that looks to the programmer as close to a PRAM as possible.
◦ Instead of ~8 “heavy” processor cores, provide ~1,024 “light” cores for parallel code and one “heavy” core for serial code.
◦ Devote on-chip bandwidth to a high-speed interconnection network rather than to maintaining coherence between private caches.

◦ For the PRAM algorithm presented here, the number of hardware threads matters more than the processing power per thread, because such algorithms happen to perform more total work than an equivalent serial algorithm; this cost is offset by sufficient parallelism in hardware.
◦ Balance between the tight synchrony of the PRAM and hardware constraints (such as locality) is obtained through support for fine-grained multithreaded code, in which a thread can advance at its own speed between (a form of) synchronization barriers (see the sketch below).
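A minimal sketch of the kind of fine-grained code this refers to, using list ranking by pointer jumping (one of the building blocks in the earlier diagram). It is written in plain C with the parallel structure described in comments; mapping each round to an XMTC spawn block with an implicit join is an assumption based on published descriptions of the XMT programming model, and the array names are illustrative.

```c
#include <stdlib.h>
#include <string.h>

/* List ranking by pointer jumping. next[i] is the successor of node i in a
 * linked list (the tail points to itself); on return, rank[i] is the number
 * of hops from i to the tail. Each round updates all nodes from a consistent
 * snapshot, so every node can be handled by its own fine-grained thread;
 * on XMT one round would be one spawn block, and the copy back plus the
 * implicit join play the role of the synchronization barrier on the slide. */
void list_rank(int n, const int *next, int *rank) {
    int *nxt  = malloc(n * sizeof(int));
    int *nxt2 = malloc(n * sizeof(int));
    int *rnk2 = malloc(n * sizeof(int));
    memcpy(nxt, next, n * sizeof(int));

    for (int i = 0; i < n; i++)
        rank[i] = (nxt[i] == i) ? 0 : 1;

    /* O(log n) rounds: each round doubles the distance each pointer spans. */
    for (int round = 0; (1 << round) < n; round++) {
        for (int i = 0; i < n; i++) {            /* one thread per node */
            rnk2[i] = rank[i] + rank[nxt[i]];
            nxt2[i] = nxt[nxt[i]];
        }
        memcpy(rank, rnk2, n * sizeof(int));     /* "barrier" between rounds */
        memcpy(nxt,  nxt2, n * sizeof(int));
    }
    free(nxt); free(nxt2); free(rnk2);
}
```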

• Maximal planar graph (the Planar3 family)
◦ Built layer by layer
◦ The first layer has three vertices and three edges.
◦ Each additional layer has three vertices and nine edges (three within the layer and six to the preceding layer; see the generator sketch below).
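A small generator sketch matching this layer-by-layer description. The slide does not say which two vertices of the preceding layer each new vertex attaches to, so the wiring below (vertex i of a layer connects to vertices i and (i+1) mod 3 of the previous layer) is an assumption for illustration and may not reproduce the exact maximal planar graph the authors used.

```c
#include <stdio.h>

/* Print the edge list of a Planar3-style graph with L layers of three
 * vertices. Layer k holds vertices 3k, 3k+1, 3k+2. Every layer gets a
 * triangle (3 edges); every layer after the first also gets 6 edges to the
 * preceding layer, for 9 edges per additional layer, matching the slide. */
void planar3(int L) {
    for (int k = 0; k < L; k++) {
        int base = 3 * k;
        for (int i = 0; i < 3; i++)                  /* triangle within the layer */
            printf("%d %d\n", base + i, base + (i + 1) % 3);
        if (k > 0) {
            for (int i = 0; i < 3; i++) {            /* 2 edges per vertex to layer k-1 */
                printf("%d %d\n", base + i, base - 3 + i);
                printf("%d %d\n", base + i, base - 3 + (i + 1) % 3);
            }
        }
    }
}

int main(void) {
    planar3(4);   /* 12 vertices, 3 + 3*9 = 30 edges */
    return 0;
}
```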