Parallel Programming Abstractions
6/16/2010

Tasks vs Threads

Similar, but not the same.

[Diagram: tasks run on top of a task scheduler, which maps them onto operating-system threads, which in turn execute on the hardware processors.]

Tasks vs Threads: Main Differences

- Tasks need not run concurrently: the task scheduler can schedule them on the same thread.

Tasks vs Threads: Main Differences

- Tasks need not run concurrently: the task scheduler can schedule them on the same thread.
- Tasks do not have fairness guarantees.

For example, consider two tasks sharing a variable t:

// Task 1:
while (t == 0) ;
Write("hello");

// Task 2:
t = 1;

A fair scheduler guarantees that Task 2 eventually runs, so Task 1 eventually prints "hello". A task scheduler may run Task 1 to completion first on its only worker thread, so Task 1 can spin forever.
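A runnable version of the same hazard against the real TPL (the class name and the volatile flag are illustrative, not from the slides). On the default multi-threaded pool this terminates, but if a scheduler ran both tasks on a single worker thread, t1 could spin forever because t2 would never get to run:

using System;
using System.Threading.Tasks;

class FairnessHazard {
    static volatile int flag = 0;

    static void Main() {
        Task t1 = Task.Run(() => { while (flag == 0) { } Console.WriteLine("hello"); });
        Task t2 = Task.Run(() => { flag = 1; });
        Task.WaitAll(t1, t2);
    }
}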

Tasks vs Threads: Main Differences

- Tasks need not run concurrently: the task scheduler can schedule them on the same thread.
- Tasks do not have fairness guarantees.
- Tasks are cheaper than threads: they have smaller stacks and do not pre-allocate OS resources.
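A hedged illustration of the cost gap (the counts and the ~1 MB default thread stack are assumptions about typical .NET configurations, not from the slides):

using System;
using System.Threading.Tasks;

class CheapTasks {
    static void Main() {
        // 100,000 tasks: queued onto a small pool of worker threads, no per-task OS state.
        var tasks = new Task[100000];
        for (int i = 0; i < tasks.Length; i++)
            tasks[i] = Task.Run(() => { });
        Task.WaitAll(tasks);
        Console.WriteLine("done");
        // 100,000 OS threads, by contrast, would each pre-allocate a stack
        // (~1 MB by default), reserving on the order of 100 GB of address space.
    }
}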

Generating Tasks from Programs

- You don't want programmers to explicitly create tasks: that is like assembly-level programming.
- Instead: design programming language constructs that capture the programmer's intent, and have the compiler convert these constructs to tasks.
- Languages with the following features are very convenient for this purpose (see the sketch below):
  - Type inference
  - Generics
  - First-class anonymous functions (lambdas/delegates)
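A minimal sketch of how the three features cooperate (the list and predicate are made up for illustration):

using System;
using System.Collections.Generic;

class FeaturesDemo {
    static void Main() {
        var words = new List<string> { "foo", "bar", "foobar" }; // type inference: 'var'
        var hits = words.FindAll(w => w.Contains("foo"));        // generic List<T> plus a lambda; w's type is inferred
        Console.WriteLine(hits.Count);                           // prints 2
    }
}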

First-Class Functions: C# Lambda Expressions

Syntax:

(input parameters) => expression

This is roughly analogous to taking the address of a named function declared elsewhere:

&anon_foo   // where anon_foo(input parameters) { return expression; } is declared elsewhere

Examples:

x => x
(x, y) => x == y
(int x, string s) => s.Length > x
() => { return SomeMethod() + 1; }

Func<int, bool> myFunc = x => x == 5;
bool result = myFunc(4);   // false

Sequential Merge Sort With Lambdas

MergeSort(int[] a, int low, int hi) {
    if (base_case) ...
    int mid = low + (hi - low) / 2;
    Action<int, int> f = (l, h) => { MergeSort(a, l, h); };
    f(low, mid - 1);
    f(mid, hi);
    Merge(a, low, mid, hi);
}

Things to Know about C# Lambdas

- A lambda is an expression (with no type of its own).
- It is converted to a delegate type.
- Parameter types are inferred from that delegate type.
- Free variables are captured: the locations they reference are moved to the heap ("boxed" into a compiler-generated closure object).
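A minimal sketch of capture (names illustrative): the lambda shares the captured location rather than copying its value, which is why the compiler hoists it to the heap.

using System;

class CaptureDemo {
    static void Main() {
        int counter = 0;                // would normally live on the stack...
        Action bump = () => counter++;  // ...but capture hoists it into a heap-allocated closure
        bump();
        bump();
        Console.WriteLine(counter);     // prints 2: the method and the lambda share one location
    }
}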

The Task Abstraction

delegate void Action();

class Task {
    Task( Action a );
    void Wait();
    // called by the WSQ (work-stealing queue) scheduler
    void Execute();
}
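The class above is a simplified model; in the real Task Parallel Library, Task.Run both creates a task and hands it to the scheduler. A minimal usage sketch:

using System;
using System.Threading.Tasks;

class TaskDemo {
    static void Main() {
        Task t = Task.Run(() => Console.WriteLine("hello from a task"));
        t.Wait();   // block the caller until the task's action has executed
    }
}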

Merge Sort With Tasks

MergeSort(int[] a, int low, int hi) {
    if (base_case) ...
    int mid = low + (hi - low) / 2;
    Task left  = new Task( delegate { MergeSort(a, low, mid - 1); } );
    Task right = new Task( delegate { MergeSort(a, mid, hi); } );
    left.Wait();
    right.Wait();
    Merge(a, low, mid, hi);
}

Parallel.Invoke

static void Invoke(params Action[] actions);

- Invokes all input actions in parallel.
- Waits for all of them to finish.

Merge Sort With Parallel.Invoke

MergeSort(int[] a, int low, int hi) {
    if (base_case) ...
    int mid = low + (hi - low) / 2;
    Parallel.Invoke(
        () => { MergeSort(a, low, mid - 1); },
        () => { MergeSort(a, mid, hi); } );
    Merge(a, low, mid, hi);
}

Compare with Sequential Version

MergeSort(int[] a, int low, int hi) {
    if (base_case) ...
    int mid = low + (hi - low) / 2;
    { MergeSort(a, low, mid - 1); }
    { MergeSort(a, mid, hi); }
    Merge(a, low, mid, hi);
}
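Putting the slides together: a minimal runnable sketch against the real System.Threading.Tasks API. The Merge helper, the sequential cutoff, and the inclusive [low, mid] / [mid+1, hi] split are choices made here so the base case works out (the slides elide it); they are not prescribed by the slides.

using System;
using System.Threading.Tasks;

class ParallelMergeSort {
    const int Cutoff = 2048;   // below this size, recurse sequentially to limit task overhead

    static void MergeSort(int[] a, int low, int hi) {   // sorts a[low..hi], inclusive
        if (low >= hi) return;
        int mid = low + (hi - low) / 2;
        if (hi - low < Cutoff) {
            MergeSort(a, low, mid);
            MergeSort(a, mid + 1, hi);
        } else {
            Parallel.Invoke(
                () => MergeSort(a, low, mid),
                () => MergeSort(a, mid + 1, hi));
        }
        Merge(a, low, mid, hi);
    }

    static void Merge(int[] a, int low, int mid, int hi) {
        var tmp = new int[hi - low + 1];
        int i = low, j = mid + 1, k = 0;
        while (i <= mid && j <= hi) tmp[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= hi)  tmp[k++] = a[j++];
        Array.Copy(tmp, 0, a, low, tmp.Length);
    }

    static void Main() {
        var rnd = new Random(0);
        var a = new int[1000000];
        for (int i = 0; i < a.Length; i++) a[i] = rnd.Next();
        MergeSort(a, 0, a.Length - 1);
        for (int i = 1; i < a.Length; i++)
            if (a[i - 1] > a[i]) { Console.WriteLine("not sorted"); return; }
        Console.WriteLine("sorted");
    }
}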

Data Parallelism

Sometimes you want to perform the same computation on all elements of a collection. For example: for every string in an array, check whether it contains "foo".

Parallel.For

For(int lower, int upper, Action<int> body);

- Iterates a variable i from lower (inclusive) to upper (exclusive).
- Calls the delegate with i as the parameter, in parallel.

Parallel.For

// sequential for
for (int i = 0; i < n; i++) {
    if (a[i].Contains("foo")) { DoSomething(a[i]); }
}

// parallel for
Parallel.For(0, n, (i) => {
    if (a[i].Contains("foo")) { DoSomething(a[i]); }
});
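A hedged variant using the real TPL overload that also passes a ParallelLoopState, so an iteration can cut the loop short (for example, to process only the first match found):

Parallel.For(0, n, (i, state) => {
    if (a[i].Contains("foo")) {
        DoSomething(a[i]);
        state.Stop();   // ask the loop not to start further iterations
    }
});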

The DAG Created by Parallel.For

A;
Parallel.For(0, N, m);   // m is the loop body: i => { B; }
C;

[Diagram: node A fans out to m(0), m(1), …, m(N-1), which all join before node C.]

Parallel.ForEach

ForEach<T>(IEnumerable<T> source, Action<T> body);

Same as Parallel.For, but iterates over the elements of a collection in parallel.
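A sketch of the string example from the Parallel.For slide rewritten with ForEach (the array a and DoSomething are carried over from that slide):

Parallel.ForEach(a, s => {
    if (s.Contains("foo")) { DoSomething(s); }
});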

Advantage of High-Level Abstractions

- Makes it easy to add parallelism: explore and experiment with different parallelization strategies.
- The language compiler/runtime can perform performance optimizations:
  - Efficiently create tasks
  - Efficiently distribute tasks to available cores
  - Tight integration with the scheduler

Example: Partitioning on Two Cores

[Figure: how loop iterations are partitioned across two cores.]

Partitioning on Four Cores

[Figure: the same workload partitioned across four cores.]

Advantage of High-Level Abstractions (continued)

- Makes it easy to add parallelism: explore and experiment with different parallelization strategies.
- The language compiler/runtime can perform performance optimizations:
  - Efficiently create tasks
  - Efficiently distribute tasks to available cores
  - Tight integration with the scheduler
- Provides programmatic features (see the sketch below):
  - Exceptions
  - Cancelling running tasks
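A hedged sketch of both features with the real TPL: an exception thrown inside a parallel loop surfaces as an AggregateException at the call site, and a CancellationToken passed via ParallelOptions cancels the loop cooperatively.

using System;
using System.Threading;
using System.Threading.Tasks;

class ExceptionsAndCancellation {
    static void Main() {
        try {
            Parallel.For(0, 1000, i => {
                if (i == 500) throw new InvalidOperationException("boom");
            });
        } catch (AggregateException ex) {
            Console.WriteLine(ex.InnerExceptions[0].Message);   // "boom"
        }

        var cts = new CancellationTokenSource();
        cts.Cancel();   // cancel before (or while) the loop runs
        try {
            Parallel.For(0, 1000, new ParallelOptions { CancellationToken = cts.Token },
                         i => { /* work */ });
        } catch (OperationCanceledException) {
            Console.WriteLine("loop cancelled");
        }
    }
}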

Semantics of Parallel Constructs

What does this do?

Parallel.Invoke(
    () => { WriteLine("Hello"); },
    () => { WriteLine("World"); } );

Semantics of Parallel Constructs

What does this do?

Parallel.Invoke(
    () => { WriteLine("Hello"); },
    () => { WriteLine("World"); } );

Compare with:

{ WriteLine("Hello"); }
{ WriteLine("World"); }

Semantics of Parallel Constructs

By writing this program:

Parallel.Invoke(
    () => { WriteLine("Hello"); },
    () => { WriteLine("World"); } );

you are telling the compiler that both outcomes, "Hello World" and "World Hello", are acceptable.

Correctness Criteria

Given a DAG, any linearization (topological ordering) of the DAG is an acceptable execution.

Correctness Criteria

A simple criterion:
- Every task operates on separate data.
- There are no dependencies other than the edges in the DAG.

We will look at more complex criteria later in the course.
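A hedged illustration of why the criterion matters (the shared variable is made up): both branches below write the same location, a dependency that is not an edge in the DAG, so this is a data race and outcomes beyond the intended linearizations become possible.

using System;
using System.Threading.Tasks;

class RacyInvoke {
    static void Main() {
        int sum = 0;
        Parallel.Invoke(
            () => { sum += 1; },    // read-modify-write of shared 'sum'...
            () => { sum += 2; });   // ...in both branches: a data race
        Console.WriteLine(sum);     // usually 3, but 1 or 2 are possible outcomes
    }
}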