Patterns for Parallel Programming
Markos Chandras, Salil Ponde, Wenjiang Xu
COMP60611: Fundamentals of Parallel and Distributed Systems (2010)
School of Computer Science, University of Manchester


The 'Data-Driven' Model
- Distributed data-moving mechanisms
- Reduced synchronization overhead
- Fine-grain parallelism
- Difficulties in constructing and managing the memories used for data matching
- Load-balancing problems in large systems
- Input/output data define the task interface
- Implementation details are hidden
- No relationship among tasks is explicitly defined
- The scheduler manages the tasks via their interfaces
- Optimized scheduling using data packing and data reuse
- Fault tolerance through task migration

References:
1) Dan I. Moldovan (1993). Parallel Processing – From Applications to Systems (p. 399)
2) V. D. Tran, L. Hluchý, G. T. Nguyen (2000). "Parallel Programming with Data Driven Model". Parallel and Distributed Processing, Proc. 8th Euromicro Workshop, Rhodos, Greece
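The execution rule above, in which the scheduler sees only each task's data interface and a task may run as soon as all of its input data has arrived, can be sketched as a toy sequential scheduler. The task names and the small dependency graph in the example are hypothetical, chosen only for illustration.

```python
# Toy data-driven scheduler: a task fires as soon as all of its inputs
# exist. The scheduler knows nothing about the tasks except their
# interfaces (input keys in, one output value out).

def run_dataflow(tasks, initial_data):
    """tasks: {name: (list_of_input_keys, function)}.
    Runs tasks repeatedly until none can fire; each task's result is
    stored under its own name, making it available to downstream tasks."""
    data = dict(initial_data)
    pending = dict(tasks)
    while pending:
        # A task is ready when every one of its inputs is available.
        ready = [name for name, (inputs, _) in pending.items()
                 if all(k in data for k in inputs)]
        if not ready:
            break  # nothing runnable: either finished or an input is missing
        for name in ready:
            inputs, fn = pending.pop(name)
            data[name] = fn(*(data[k] for k in inputs))
    return data

# Hypothetical two-stage graph: (a, b) -> sum -> double.
result = run_dataflow(
    {"sum": (["a", "b"], lambda x, y: x + y),
     "double": (["sum"], lambda s: 2 * s)},
    {"a": 3, "b": 4})
print(result["double"])  # 14
```

A real data-driven runtime would execute the ready set in parallel and handle the data-matching memory the slide mentions; this sketch only shows the firing rule.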

The Distributed Array Pattern
- Dealing effectively with large arrays
- Array elements should be close to the Processing Element (PE) that processes them
- Ways of distribution: (a) one-dimensional (b) two-dimensional (c) block-cyclic
- The problem is defined in terms of global array indices; the program is written in terms of local array indices
- Choosing the way of distribution

References:
1) Timothy G. Mattson, Beverly A. Sanders, Berna L. Massingill (2004). Patterns for Parallel Programming
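The global-to-local index translation implied by the last two points can be sketched for a one-dimensional block-cyclic distribution. The block size and PE count below are arbitrary example values, not anything prescribed by the pattern.

```python
# Map a global array index to (owning PE, local index) under a 1-D
# block-cyclic distribution: the array is cut into blocks of size B,
# and blocks are dealt out to P processing elements round-robin.

def block_cyclic_owner(global_idx, block_size, num_pes):
    block = global_idx // block_size        # which block holds this element
    owner = block % num_pes                 # blocks are dealt out round-robin
    local_block = block // num_pes          # this PE's blocks that precede it
    local_idx = local_block * block_size + global_idx % block_size
    return owner, local_idx

# With B=2 and P=3, global indices 0..11 land on PEs 0,0,1,1,2,2,0,0,1,1,2,2.
assert block_cyclic_owner(3, 2, 3) == (1, 1)
assert block_cyclic_owner(6, 2, 3) == (0, 2)
```

Setting `block_size = 1` recovers a pure cyclic distribution, and setting it to `ceil(N / num_pes)` recovers a pure block (one-dimensional) distribution, which is why block-cyclic is the general case among the options listed above.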

The Event-Based Coordination Pattern
- Semi-independent tasks interact in an irregular fashion:
  (a) the interaction is determined by the flow of data
  (b) the data implies ordering constraints between the tasks
  (c) the interaction forms a directed graph with cycles
- Solution: events and tasks
  (a) Task: receive event --> process event --> (possibly) send events
  (b) Event flow: needs a form of asynchronous communication of events; message passing carries each event from the task that generates it to the task that processes it
  (c) Avoiding deadlock: (1) include more information in the messages (2) timeouts
  (d) One task per PE, or multiple tasks per PE (requires good load balancing)
  (e) Communication of events should be efficient

References:
1) Timothy G. Mattson, Beverly A. Sanders, Berna L. Massingill (2004). Patterns for Parallel Programming
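The task loop in (a) and the timeout-based deadlock guard in (c) can be sketched with ordinary threads and message queues; the queue sizes, timeout value, and the squaring "processing" step are illustrative choices, not part of the pattern itself.

```python
# Sketch of event-based coordination: a task loops on
# "receive event -> process event -> (possibly) send events",
# communicating with other tasks through message queues.
# A receive timeout keeps the task from blocking forever (deadlock).
import queue
import threading

def worker(inbox, outbox):
    while True:
        try:
            event = inbox.get(timeout=1.0)  # timeout avoids waiting forever
        except queue.Empty:
            break                            # treat prolonged silence as shutdown
        if event is None:                    # explicit termination event
            break
        outbox.put(event * event)            # "process" the event, emit a new one

inbox, outbox = queue.Queue(), queue.Queue()
task = threading.Thread(target=worker, args=(inbox, outbox))
task.start()
for e in (2, 3, 4):
    inbox.put(e)                             # generate events for the worker
inbox.put(None)                              # then ask it to stop
task.join()
results = [outbox.get() for _ in range(3)]
print(results)  # [4, 9, 16]
```

With one worker reading a FIFO queue the ordering constraints of (b) are preserved automatically; with multiple tasks per PE, as in (d), results could arrive out of order and load balancing becomes the main concern.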

List of References
- Dan I. Moldovan (1993). Parallel Processing – From Applications to Systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
- V. D. Tran, L. Hluchý, G. T. Nguyen (2000). "Parallel Programming with Data Driven Model". Parallel and Distributed Processing, Proc. 8th Euromicro Workshop, Rhodos, Greece.
- Timothy G. Mattson, Beverly A. Sanders, Berna L. Massingill (2004). Patterns for Parallel Programming. Addison-Wesley.