
Concurrent Programming

2 The Cunning Plan
We'll look into:
– What concurrent programming is
– Why you care
– How it's done
We're going to skim over *all* the interesting details

3 One-Slide Summary
– There are many different ways to do concurrent programming
– More than one thread of control can be active at once
– We need synchronization primitives (e.g. semaphores) to deal with shared resources
– Message passing is complicated


5 What?
Concurrent Programming means either:
– using multiple threads on a single machine (the OS simulates concurrency, or uses multiple cores/processors), or
– using message passing (memory is not shared between threads; more general in terms of hardware requirements, etc.)

6 What (Shorter version)
There are a million different ways to do concurrent programming
We'll focus on three:
– co-begin blocks
– threads
– message passing

7 Concurrent Programming: Why?
1. Because it is intuitive for some problems (say you're writing httpd)
2. Because we need better-than-sequential performance
3. Because the problem is inherently distributed (e.g. BitTorrent)

8 Coding Challenges
How do you divide the problem across threads?
– easy: matrix multiplication using threads
– hard: heated plate using message passing
– harder: n-body simulation for large n

9 One Slide on Co-Begin
We want to execute commands simultaneously, m'kay. Solution:

    int x;
    int y;
    // ...
    run-in-parallel {
        functionA(&x) | functionB(&x, &y)
    }

[diagram: Main forks into threads A and B, which later rejoin Main]
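There is no run-in-parallel construct in C, but the same fork/join shape can be sketched with pthreads. This is a minimal sketch, not from the slides; functionA and functionB are stand-ins for the hypothetical workers above:

    #include <pthread.h>

    static int x, y;

    /* Stand-ins for the slide's hypothetical workers. */
    static void functionA(int *px)          { *px = 1; }
    static void functionB(int *px, int *py) { *py = *px + 1; }

    /* pthreads wants void *(*)(void *), so wrap each call. */
    static void *run_a(void *arg) { functionA(&x); return NULL; }
    static void *run_b(void *arg) { functionB(&x, &y); return NULL; }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, run_a, NULL);  /* "co-begin" */
        pthread_create(&b, NULL, run_b, NULL);
        pthread_join(a, NULL);                  /* "co-end": wait for both */
        pthread_join(b, NULL);
        return 0;
    }

Note that, just like the slide's version, both workers touch x concurrently; a real program would need synchronization here (coming up shortly).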

10 Threads
– Most common in everyday applications
– Instead of a run-in-parallel block, we want explicit ways to create and destroy threads
– Threads can all see a program's global variables (i.e. they share memory)

11 Some Syntax
Java:

    Thread mythread = new Thread(new Runnable() {
        public void run() {
            // your code here
        }
    });
    mythread.start();
    mythread.join();

12 Some Syntax
Java:

    Thread mythread = new Thread(new Runnable() {
        public void run() {
            // your code here
        }
    });
    mythread.start();
    mythread.join();

C (pthreads):

    void *foo(void *arg) {
        int *x = (int *)arg;
        // your code here
        return NULL;
    }
    // ...
    int bar = 5;
    pthread_t my_id;
    pthread_create(&my_id, NULL, foo, (void *)&bar);
    // ...
    pthread_join(my_id, NULL);

13 Example: Matrix Multiplication
Given matrices A and B (shown as figures), compute C = A × B.
Each entry of C is the dot product of a row of A with a column of B, e.g. for row (9, 7, 4) of A and column (2, 5, -3) of B:

    9 * 2 = 18,  7 * 5 = 35,  4 * -3 = -12,  sum = 41

14 Matrix Multiplication 'Analysis'
We have:

    p = 4    size(A) = (p, q)
    q = 3    size(B) = (q, r)
    r = 4    size(C) = (p, r)

Complexity:
– p × r elements in C
– O(q) operations per element
Note: calculating each element of C is independent of the other elements

15 Matrix Multiplication using Threads

    pthread_t threads[P][R];
    struct location locs[P][R];

    for (i = 0; i < P; ++i) {
        for (j = 0; j < R; ++j) {
            locs[i][j].row = i;
            locs[i][j].col = j;
            pthread_create(&threads[i][j], NULL,
                           calc_cell, (void *)&locs[i][j]);
        }
    }
    for (i = 0; i < P; ++i) {
        for (j = 0; j < R; ++j) {
            pthread_join(threads[i][j], NULL);
        }
    }
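The slide leaves struct location and calc_cell undefined; a minimal sketch consistent with it, assuming global matrices A (P×Q), B (Q×R), and C (P×R):

    struct location { int row; int col; };

    /* Computes one entry of C as the dot product of a row of A
       and a column of B, then exits. A, B, C, Q are assumed globals. */
    void *calc_cell(void *arg) {
        struct location *loc = (struct location *)arg;
        int sum = 0;
        for (int k = 0; k < Q; ++k)
            sum += A[loc->row][k] * B[k][loc->col];
        C[loc->row][loc->col] = sum;
        return NULL;
    }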

16 Matrix Multiplication using Threads

    for each element in C:
        create a thread that runs 'calc_cell' on that element
    for each created thread:
        wait until the thread finishes
    // Profit

17 Postmortem
Relatively easy to parallelize:
– matrices A and B are 'read only'
– each thread writes to a unique entry in C
– entries in C do not depend on each other
What are some problems with this?
– overhead of creating threads
– use of shared memory

18 Synchronization
So far, we have only covered how to create & destroy threads
What else do we need? (See title)

19 Synchronization
We want to do things like:
– event A must happen before event B, and
– events A and B cannot occur simultaneously
Is there a problem here?

    Thread 1:  counter = counter + 1
    Thread 2:  counter = counter + 1
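Yes: the increment compiles to a load, an add, and a store, so the two threads' updates can interleave and one can be lost. A quick sketch (not from the slides, assuming only pthreads) that usually demonstrates the lost updates:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;

    static void *inc_many(void *arg) {
        for (int i = 0; i < 1000000; ++i)
            counter = counter + 1;   /* load, add, store: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, inc_many, NULL);
        pthread_create(&t2, NULL, inc_many, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%ld\n", counter);    /* typically less than 2000000 */
        return 0;
    }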

20 Semaphores
– A number n (initialized to some value)
– Can only increment (sem.V()) and decrement (sem.P())
– n > 0 : P() doesn't block
– n ≤ 0 : P() blocks
– V() unblocks some waiting process
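For concreteness: POSIX exposes exactly these two operations as sem_wait (P) and sem_post (V). A minimal sketch of using n to count resources:

    #include <semaphore.h>

    static sem_t pool;

    void setup(void) { sem_init(&pool, 0, 4); }  /* n = 4 resources */

    void use_one(void) {
        sem_wait(&pool);   /* P(): blocks while no resource is free */
        /* ... use one of the 4 resources ... */
        sem_post(&pool);   /* V(): release, possibly waking a waiter */
    }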

21 More Semaphore Goodness
– Semaphores are straightforward to implement on most types of systems
– Easy to use for resource management (set n equal to the number of resources)
– Some additional features are common (e.g. bounded semaphores)

22 Semaphore Example
Let's try this again:

    Main:
        Semaphore wes = new Semaphore(0)
        // start threads 1 and 2 simultaneously

    Thread 1:
        counter = counter + 1
        wes.V()

    Thread 2:
        wes.P()
        counter = counter + 1

Thread 2's P() blocks until Thread 1's V(), so the two increments can no longer interleave.
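The same ordering trick in C, sketched with a POSIX semaphore initialized to 0 (thread startup omitted; the names mirror the slide):

    #include <semaphore.h>

    static int counter = 0;
    static sem_t wes;        /* sem_init(&wes, 0, 0) before starting */

    void *thread1(void *arg) {
        counter = counter + 1;
        sem_post(&wes);      /* V(): announce the increment is done */
        return NULL;
    }

    void *thread2(void *arg) {
        sem_wait(&wes);      /* P(): wait for thread 1's increment */
        counter = counter + 1;
        return NULL;
    }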

23 Semaphore Example 2
Suppose we want two threads to "meet up" (rendezvous) at specific points in their code:

    Main:
        Semaphore aArrived = new Semaphore(0)
        Semaphore bArrived = new Semaphore(0)
        // start threads A and B simultaneously

A first attempt deadlocks: each thread waits for the other to arrive before announcing its own arrival:

    Thread A:
        foo_a1
        bArrived.P()
        aArrived.V()
        foo_a2

    Thread B:
        foo_b1
        aArrived.P()
        bArrived.V()
        foo_b2

Signaling before waiting fixes it:

    Thread A:
        foo_a1
        aArrived.V()
        bArrived.P()
        foo_a2

    Thread B:
        foo_b1
        bArrived.V()
        aArrived.P()
        foo_b2

24 Deadlock
'Deadlock' refers to a situation in which one or more threads are waiting for something that will never happen
Theorem: You will, at some point in your life, write code that deadlocks

25 Readers/Writers Problem
Let's do a slightly bigger example. Problem:
– some finite buffer b
– multiple writer threads (only one can write at a time)
– multiple reader threads (many can read at a time)
– reading is only allowed when no writing is happening

26 Readers/Writers Solution #1

    int readers = 0
    Semaphore mutex = new Semaphore(1)
    Semaphore roomEmpty = new Semaphore(1)

    Writers:
        roomEmpty.P()
        // write here
        roomEmpty.V()

27 Readers/Writers Solution #1

    Readers:
        mutex.P()
        readers++
        if (readers == 1)
            roomEmpty.P()    // first reader locks out writers
        mutex.V()
        // read here
        mutex.P()
        readers--
        if (readers == 0)
            roomEmpty.V()    // last reader lets writers back in
        mutex.V()
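The same solution transliterated to C with POSIX semaphores (a sketch; the read and write bodies are placeholders):

    #include <semaphore.h>

    static int readers = 0;
    static sem_t mutex, roomEmpty;   /* both initialized to 1 */

    void writer(void) {
        sem_wait(&roomEmpty);        /* wait for the room to empty */
        /* write here */
        sem_post(&roomEmpty);
    }

    void reader(void) {
        sem_wait(&mutex);
        if (++readers == 1)
            sem_wait(&roomEmpty);    /* first reader locks out writers */
        sem_post(&mutex);
        /* read here */
        sem_wait(&mutex);
        if (--readers == 0)
            sem_post(&roomEmpty);    /* last reader lets writers in */
        sem_post(&mutex);
    }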

28 Starvation
– Starvation occurs when a thread is continuously denied resources
– Not the same as deadlock: it might eventually get to run, but it needs to wait longer than we want
– In the previous example, a writer might 'starve' if there is a continuous onslaught of readers

29 Guiding Question
Earlier, I said sem.V() unblocks some waiting thread
If we don't unblock in FIFO order, that means we could cause starvation
Do we care?

30 Synchronization Summary
We can use semaphores to enforce synchronization:
– ordering
– mutual exclusion
– queuing
There are other constructs as well. See your local OS Prof.

31 Message Passing
– Threads and co. rely on shared memory
– Semaphores make very little sense if they cannot be shared between n > 1 threads
– What about systems in which we can't share memory?

32 Message Passing
Threads/processes are created for us (they just exist)
We can do the following:

    blocking_send(int destination, char *buffer, int size)
    blocking_receive(int source, char *buffer, int size)
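blocking_send and blocking_receive are idealized names; in a real message-passing library like MPI, the closest standard calls are MPI_Send and MPI_Recv. A minimal sketch:

    #include <mpi.h>
    #include <string.h>

    int main(int argc, char **argv) {
        int rank;
        char buffer[16];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            strcpy(buffer, "hello");
            /* send 16 chars to rank 1, tag 0 */
            MPI_Send(buffer, 16, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive 16 chars from rank 0, tag 0 */
            MPI_Recv(buffer, 16, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }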

33 Message Passing: Motivation
We don't care if threads run on different machines:
– same machine: use virtual memory tricks to make messages very quick
– different machines: copy and send over the network

34 Heated Plate Simulation
Suppose you have a metal plate:
– three sides are chilled to 273 K
– one side is heated to 373 K

35 Heated Plate Simulation
Problem: calculate the heat distribution after some time t.
[figures: heat distribution at t = 10, t = 30, t = 50]

36 Heated Plate Simulation
We model the problem by dividing the plate into small squares.
For each time step, each square takes the average of its four neighbors.
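One sequential time step of that update might look like the sketch below; the grid size N×M and the fixed-boundary handling are assumptions, not from the slides:

    #define N 64
    #define M 64

    /* Each interior cell becomes the average of its four neighbors;
       boundary cells keep their fixed temperatures. */
    void step(const double cur[N][M], double nxt[N][M]) {
        for (int i = 1; i < N - 1; ++i)
            for (int j = 1; j < M - 1; ++j)
                nxt[i][j] = (cur[i-1][j] + cur[i+1][j] +
                             cur[i][j-1] + cur[i][j+1]) / 4.0;
    }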

37 Heated Plate Simulation
Problem: we need to communicate for each time step, and sending messages is expensive…
[diagram: the plate divided into strips owned by processes P1, P2, P3]

38 Heated Plate Simulation
Problem: we need to communicate for each time step, and sending messages is expensive…
Solution: send fewer, larger messages, and limit the longest message path
[diagram: the plate divided into strips owned by processes P1, P2, P3]
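In MPI terms, "fewer, larger messages" usually means each process trades whole boundary rows (ghost rows) with its neighbors once per step. A sketch, with row length m and the neighbor's rank as assumed parameters; MPI_Sendrecv pairs the send and the receive so neither side can deadlock the way the next slide shows:

    #include <mpi.h>

    /* Send my top boundary row to the neighbor above and receive its
       bottom row into my ghost row, in one paired call. Edge processes
       can pass MPI_PROC_NULL, which turns the operation into a no-op. */
    void exchange_up(double *my_top_row, double *ghost_row,
                     int m, int up_rank) {
        MPI_Sendrecv(my_top_row, m, MPI_DOUBLE, up_rank, 0,
                     ghost_row,  m, MPI_DOUBLE, up_rank, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }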

39 How to cause deadlock in MPI

    Process 1:
        char *buff = "Goodbye";
        char *buff2 = new char[15];
        send(2, buff, 8);
        recv(2, buff2, 15);

    Process 2:
        char *buff = ", cruel world\n";
        char *buff2 = new char[8];
        send(1, buff, 15);
        recv(1, buff2, 8);

If send blocks until the matching receive is posted, both processes block in send and neither ever reaches its recv: deadlock.

40 Postmortem
– Our heated plate solution does not rely on shared memory
– Sending messages becomes complicated in a hurry (easy to do the wrong thing)
– We need to reinvent the wheel constantly for different interaction patterns

41 Example Summary
Matrix Multiplication Example
– used threads and implicitly shared memory
– this is common for everyday applications (especially useful for servers, GUI apps, etc.)
Heated Plate Example
– used message passing
– this is more common for big science and big business (also e.g. peer-to-peer)
– it is not how you would code your average firefox

42 Guiding Question
If you're writing a GUI app (let's call it "firefox"), would you prefer to use threads or message passing?

43 Summary
Looked at three ways to do concurrent programming:
– co-begin
– threads, implicitly shared memory, semaphores
– message passing
Concerns: scheduling, deadlock, starvation
