Models and Languages for Parallel Computation

Presentation transcript:

Models and Languages for Parallel Computation
D. Skillicorn and D. Talia, ACM Computing Surveys, Vol. 30, No. 2, June 1998
Presented by Russell Zuck, June 28, 2005

Overview
- Motivation
- Difficulties of parallel programming
- Evaluation criteria
- Categories of parallel programming models
- Summary of trends in model development
- A related perspective and observation

Motivation
- What parallel programming models are out there?
- How well do they address the problems inherent in parallel programming?
- How difficult are they to use?
- What types of architectures are they used on?

Difficulties of Parallel Programming
- Many architectures
  - Most are special purpose
  - No standards
- Many languages/tools
  - Optimizers don't work well
- Wide range of programming paradigms
- Lack of skilled programmers
  - Parallel thinking
- Lack of a substantial market

Evaluation Criteria
- Easy to program
  - Decomposition
  - Mapping
  - Communication
  - Synchronization
- Software development methodology
  - How do you determine correctness?
  - Sequential testing and debugging techniques don't extend well to parallel systems
  - Large state space to test due to the extra degree of freedom
  - Testing is limited to a couple of architectures

Evaluation Criteria (continued)
- Architecture independence
- Easy to understand
  - Can a large number of people become proficient at it?
- Guaranteed performance
  - How much will execution performance differ from one architecture to another?
- Cost measures
  - Execution time
  - Processor utilization
  - Development costs

Parallel Programming Models
- Models differ in how far they abstract the following concepts away from the programmer:
  - Parallelism
  - Decomposition
  - Mapping
  - Communication
  - Synchronization
- The ideal model: the programmer is not required to be explicitly involved in any of the above operations

Nothing Explicit
- Complete abstraction of parallel operations from the programmer
- These exist only as "we are nearly there" types of languages
- Examples
  - Optimizing (auto-parallelizing) compilers
  - Haskell: a higher-order functional language
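A minimal sketch (not from the paper) of what "nothing explicit" looks like in source code: the program contains no parallel constructs at all, and any parallelism must be discovered by an auto-parallelizing compiler or the language implementation.

```c
#include <stdio.h>

#define N 1000000

/* Plain sequential code with no parallel constructs.  In a "nothing
 * explicit" model the compiler is expected to notice that the loop
 * iterations are independent and parallelize them without any help
 * from the programmer. */
int main(void) {
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {      /* independent iterations */
        a[i] = i;
        b[i] = 2.0 * i;
    }
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];            /* no loop-carried dependences */

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```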

Parallelism Explicit
- Techniques for identifying parallelism are required
  - Library functions
  - Syntax extensions
- Example
  - Fortran
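OpenMP is not mentioned in the slides; it is used below only as a familiar example of the syntax-extension approach. The programmer marks where parallelism exists, while decomposition, mapping, communication, and synchronization remain implicit.

```c
/* Illustrative only: an OpenMP pragma states that the iterations may run
 * in parallel; the compiler and runtime decide how the work is split and
 * scheduled.  Build with, e.g., gcc -fopenmp. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("sum = %f\n", sum);
    return 0;
}
```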

Decomposition Explicit
- Programmer must specify the division of the problem into parallelizable pieces
  - Library functions
- Example
  - BSP (bulk synchronous parallelism)
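A sketch of explicit decomposition in the BSP style, assuming the classic BSPlib C interface (bsp_begin, bsp_pid, bsp_sync, ...); header name and exact signatures vary between implementations, so treat this as illustrative rather than definitive.

```c
/* Each process works on its own explicitly chosen slice of the problem,
 * with supersteps separated by barrier synchronization. */
#include <stdio.h>
#include "bsp.h"

int main(void) {
    bsp_begin(bsp_nprocs());          /* start one process per processor */

    int p = bsp_nprocs();             /* number of processes             */
    int s = bsp_pid();                /* this process's identifier       */

    /* Explicit decomposition: process s takes its block of [0, N). */
    const int N = 1000;
    int lo = s * N / p, hi = (s + 1) * N / p;
    long local = 0;
    for (int i = lo; i < hi; i++)
        local += i;

    bsp_sync();                       /* end of the superstep            */
    printf("process %d summed [%d,%d): %ld\n", s, lo, hi, local);

    bsp_end();
    return 0;
}
```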

Mapping Explicit
- Programmer must specify the distribution of program pieces onto processors
- Examples
  - RPC
  - Paralf: an annotated functional language
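The slide's own examples are RPC and Paralf; as a hedged stand-in, the POSIX-threads sketch below shows the essence of explicit mapping: the programmer, rather than the system, decides which block of data each thread handles.

```c
#include <pthread.h>
#include <stdio.h>

#define N 1000
#define P 4

static double data[N];

struct task { int lo, hi; double sum; };

static void *worker(void *arg) {
    struct task *t = arg;
    t->sum = 0.0;
    for (int i = t->lo; i < t->hi; i++)
        t->sum += data[i];
    return NULL;
}

int main(void) {
    pthread_t tid[P];
    struct task tasks[P];

    for (int i = 0; i < N; i++) data[i] = i;

    /* Explicit mapping: block k of the array is assigned to thread k. */
    for (int k = 0; k < P; k++) {
        tasks[k].lo = k * N / P;
        tasks[k].hi = (k + 1) * N / P;
        pthread_create(&tid[k], NULL, worker, &tasks[k]);
    }

    double total = 0.0;
    for (int k = 0; k < P; k++) {
        pthread_join(tid[k], NULL);
        total += tasks[k].sum;
    }
    printf("total = %f\n", total);
    return 0;
}
```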

Communication Explicit
- Programmer is responsible for interprocessor communication
- Example
  - ABCL/1
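ABCL/1's actor-style syntax is not reproduced here; instead, a hedged C sketch using a Unix pipe between two processes shows what explicit communication means in practice: every message is sent and received by calls the programmer writes.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    pid_t child = fork();
    if (child == 0) {                 /* child: the "worker" process */
        close(fd[0]);
        int result = 6 * 7;           /* some local computation      */
        write(fd[1], &result, sizeof result);   /* explicit send     */
        close(fd[1]);
        exit(0);
    }

    close(fd[1]);
    int result;
    read(fd[0], &result, sizeof result);        /* explicit receive  */
    close(fd[0]);
    wait(NULL);

    printf("received %d from worker\n", result);
    return 0;
}
```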

Everything Explicit
- Almost no abstraction from the programmer
- Examples
  - Orca
  - PRAM
  - MPI
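A minimal MPI sketch (MPI is listed on the slide) showing how much the programmer must spell out in an "everything explicit" model: process identity, decomposition, mapping, communication, and synchronization are all written by hand.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Explicit decomposition and mapping: each rank computes one value. */
    int local = rank * rank;

    if (rank != 0) {
        /* Explicit communication: every worker sends its result to rank 0. */
        MPI_Send(&local, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int total = local;
        for (int src = 1; src < size; src++) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += value;
        }
        printf("sum of rank^2 over %d ranks = %d\n", size, total);
    }

    MPI_Finalize();                 /* explicit teardown */
    return 0;
}
```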

Summary of Trends
- So which is the best compromise?
  - As with most things, the middle of the road seems best: a model with a medium amount of abstraction from the programmer
- Trends in model development
  - Work on models with little abstraction is diminishing
  - Effort is concentrating on models with midrange abstraction, which conceal some aspects of parallelism while preserving expressiveness
  - Some hope still resides with highly abstract models... don't hold your breath too long!

Another Perspective
- Speculation about the future
  - As parallel machines regain popularity, more manufacturers will be willing to produce them
  - Eventually a handful of standard architectures will emerge, covering SIMD, MIMD (shared memory), and MIMD (fixed connection)
- An alternate solution
  - Use one or two development languages and build virtual machines (middleware) for each type of architecture
  - Similar to the Java paradigm (see the diagram below)

[Diagram: a single programming language sits on top of three virtual machines (VM1, VM2, VM3), which in turn target the three architecture classes: MIMD (shared memory), MIMD (fixed connection), and SIMD.]