Concurrency in Programming Languages
Matthew J. Sottile, Timothy G. Mattson, and Craig E. Rasmussen
© 2009 Matthew J. Sottile, Timothy G. Mattson, and Craig E. Rasmussen

Chapter 2 Objectives
Establish terminology for discussing concurrency.
Introduce the topic of dependencies.
Define important fundamental concepts.

Outline
Terminology
Dependencies
Atomicity
Mutual exclusion and critical sections
Thread safety

Units of execution
Reasoning about concurrency and parallelism requires us to have terms to discuss that which is executing. Two common types of execution unit:
–Processes
–Threads

Processes
Common OS abstraction. Represents a running program.
Private address space relative to other processes.
Contextual information such as register set, I/O handles, etc.

Threads
A process contains one or more threads.
Threads have a private register set and stack.
Threads within a process share memory with each other.
–Threads may also have private per-thread “thread-local memory”.
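
A small illustrative sketch (not part of the original slides; the names are made up) in C using POSIX threads. Both threads read and write the same global variable, which lives in the shared address space of the process, while the variable local lives on each thread's private stack.

    #include <pthread.h>
    #include <stdio.h>

    int shared_value = 0;             /* shared: visible to every thread in the process */

    void *worker(void *arg) {
        int local = *(int *)arg;      /* private: lives on this thread's own stack */
        shared_value = local;         /* updates are visible to the other thread */
        printf("thread %d wrote %d\n", local, shared_value);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final shared_value = %d\n", shared_value);
        return 0;
    }

(Compile with a pthreads-capable compiler, e.g. cc file.c -lpthread.)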

Modern operating systems
The line between threads and processes can be blurry.
–Often both are implemented on top of the same OS abstraction: kernel threads.
The critical distinction for many discussions is data sharing.
–Processes do not share address space without explicit assistance.
–Threads within a process share address space.

Terminology
In the concurrent programming world, different projects use different terminology for execution units:
–Threads
–Tasks
–Processes
–Activities
It is generally a good idea to look up precisely what a term means in whatever context you encounter it.

Parallel vs. concurrent
What is the difference between a parallel and a concurrent program?

Parallel vs. concurrent
Consider two units of execution that are started at the same time.
If they run in parallel, then over any time interval during their execution, both may be executing at the same time.
If they run concurrently, they may execute in parallel, or they may be sequentialized, where only one makes progress at any given point in time and their execution is interleaved.
–Multitasking operating systems have exploited this for many years.

Parallel vs. concurrent
Consider three programs executing concurrently.

Shared vs. distributed memory
A common distinction when considering parallel and networked computers.
Consider a memory address space and a set of processing elements (CPUs or individual CPU cores).

Shared memory
Every processing unit can directly address every element of the address space. Examples include:
–Multicore CPUs: each core can see all of the RAM in the machine.
–Shared memory parallel systems: traditional parallel computers, such as servers.

Distributed memory
Processing elements can directly address only a proper subset of the address space.
–In order to access memory outside of this subset, processing elements must either:
Be assisted by another processing element that can directly address the memory, or
Use specialized mechanisms (such as network hardware) to access the memory.
We distinguish directly from indirectly accessible memory as local versus remote.
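
One common specialized mechanism for reaching remote memory is explicit message passing. The following is a minimal sketch in C using MPI (not part of the original slides; it assumes an MPI implementation such as MPICH or Open MPI is installed): rank 0 sends a value that lives in its local memory to rank 1, which cannot address that memory directly and must receive a copy.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                       /* lives in rank 0's local memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* rank 1 cannot dereference rank 0's memory; it receives a copy */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

(Run with at least two ranks, e.g. mpirun -np 2 ./a.out.)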

Hybrid approaches
It is common to see clusters of computers that use both approaches.
–Nodes within the cluster are based on shared memory multicore processors.
–Networked nodes use message passing to allow processors to access data residing in remote address spaces.

Hardware trends
There are trends in hardware that have characteristics of distributed memory within a single machine. Consider specialized memory on accelerator cards such as Graphics Processing Units (GPUs).
–Explicit memory copies are required between the CPU and the GPU.
–Neither side can necessarily directly address the entire address space spanning the memory of the CPU host and the GPU device.
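
As an illustration of these explicit copies (not part of the original slides), a minimal sketch in C using the CUDA runtime API; it assumes an NVIDIA GPU and the CUDA toolkit, and the buffer size is arbitrary. Error checking is omitted for brevity.

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1024;
        float host_buf[1024];
        float *dev_buf = NULL;

        for (int i = 0; i < n; i++) host_buf[i] = (float)i;

        /* Device memory is a separate address space; the host cannot simply
           dereference dev_buf, it must copy data to and from it explicitly. */
        cudaMalloc((void **)&dev_buf, n * sizeof(float));
        cudaMemcpy(dev_buf, host_buf, n * sizeof(float), cudaMemcpyHostToDevice);

        /* ... kernel launches would operate on dev_buf here ... */

        cudaMemcpy(host_buf, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev_buf);
        printf("copied %d floats to the GPU and back\n", n);
        return 0;
    }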

Outline
Terminology
Dependencies
Atomicity
Mutual exclusion and critical sections
Thread safety

Dependencies
Dependencies are a critical property of any system, and they directly impact concurrency.
–They even impact real-world activities like cooking.
A dependency is a state (either of data or control) that must be reached before a part of a program can execute.

Simple example
x = 6;
y = 7;
z = x + y;
Both x and y must be defined before they can be read and used in the computation of z. Therefore z depends on the assignment statements that associate values with x and y.

Dependencies limit parallelism
If part of a program (a “subprogram”) depends on another subprogram, then the dependent portion must wait until that which it depends on is complete.
–The dependency dictates sequentialization of the subprograms.
If no such dependency exists, the subprograms can execute in parallel.

Bernstein’s conditions
We can formalize this via Bernstein’s conditions. Consider a subprogram P.
Let IN(P) represent the set of memory locations (including registers) or variables that P uses as input by reading from them.
Let OUT(P) represent a similar set of locations that P uses as output by writing to them.
We can use these sets to determine if two subprograms P1 and P2 are dependent, and therefore whether or not they can execute in parallel.

Bernstein’s conditions
Given two subprograms P1 and P2, their execution in parallel is equivalent to their execution in sequence if the following conditions hold:
–IN(P1) ∩ OUT(P2) = ∅
–OUT(P1) ∩ IN(P2) = ∅
–OUT(P1) ∩ OUT(P2) = ∅
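
A small worked example (not from the slides; the variable names are chosen for illustration), applying the conditions to two assignment statements:

    P1: a = b + c;     IN(P1) = {b, c}   OUT(P1) = {a}
    P2: d = b * 2;     IN(P2) = {b}      OUT(P2) = {d}

    IN(P1) ∩ OUT(P2) = ∅, OUT(P1) ∩ IN(P2) = ∅, OUT(P1) ∩ OUT(P2) = ∅,
    so P1 and P2 may execute in parallel. If P2 were instead d = a * 2,
    then OUT(P1) ∩ IN(P2) = {a}, the conditions fail, and the two
    subprograms must execute in sequence.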

Outline
Terminology
Dependencies
Atomicity
Mutual exclusion and critical sections
Thread safety

Atomicity
“Atom” is derived from the Greek word for indivisible.
Often we write programs that include sequences of operations that we would like to behave as though they were a single indivisible operation.
–Classic example: transferring money between two bank accounts.
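
A sketch of the classic example in C (illustrative only; the struct and function names are made up): the withdrawal and the deposit must appear to other threads as one indivisible step, otherwise another thread can observe a state in which the money has left one account but not yet arrived in the other.

    struct account { long balance; };

    /* Intended to be atomic: no other thread should be able to observe
       the state between the two assignments below. */
    void transfer(struct account *from, struct account *to, long amount) {
        from->balance -= amount;   /* intermediate state: money has "vanished" */
        to->balance   += amount;
    }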

Atomicity
Atomicity is important in a concurrent system because of parallelism and interleaving.
–The program cannot reason about what other units of execution may be doing at any given time.
Chaos may ensue if other units of execution see intermediate state during the execution of a sequence of operations intended to be atomic.

Intermediate results
Consider a shared counter. We assume incrementing the counter is an atomic operation. If we do not, we could miss updates.
shared int counter;
private int x;
x = counter;
x = x + 1;
counter = x;

Missed update
Poor interleaving of concurrent threads of execution can result in missed updates if atomicity isn’t enforced.
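
One possible interleaving (an illustrative trace, not from the slides): two threads each execute the increment sequence from the previous slide once, starting from counter = 5, yet only one update survives.

    Thread A: x = counter;    (reads 5)
    Thread B: x = counter;    (also reads 5)
    Thread A: x = x + 1;      (A's private x is 6)
    Thread B: x = x + 1;      (B's private x is 6)
    Thread A: counter = x;    (counter becomes 6)
    Thread B: counter = x;    (counter is still 6 -- A's update is lost)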

Atomicity
In Chapter 3, we discuss concurrency control constructs that have been invented to support atomicity of complex sequences of operations.

Outline
Terminology
Dependencies
Atomicity
Mutual exclusion and critical sections
Thread safety

Mutual exclusion
As seen in the shared counter example, the reason we wish to have the sequence execute atomically is to ensure no other threads of execution can see intermediate or invalid data.
One (but not the only) method for ensuring atomicity is to guarantee mutual exclusion within the sequence.

Mutual exclusion
Sequences of operations that can reveal invalid or intermediate data are referred to as critical sections.
In the counter example, not only would this mean the increment operation would be a critical section, but…
–Any other operation that modifies the counter would be.
–As would any operation reading the counter.

Mutual exclusion
Mutual exclusion refers to ensuring that one and only one thread of execution is in a critical section at any point in time.
Concurrency control mechanisms can be used to implement this.
Mutual exclusion is a popular technique, but it is not the only way to build concurrent code that is correct. We will encounter others soon.
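
A minimal sketch of mutual exclusion around the counter from the earlier slide, in C with POSIX threads (illustrative only; it assumes pthreads is available). The mutex guarantees that only one thread at a time executes the read-modify-write critical section, so no updates are lost.

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;
    pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&counter_lock);    /* enter critical section */
            counter = counter + 1;                /* read-modify-write */
            pthread_mutex_unlock(&counter_lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* always 200000 with the lock held */
        return 0;
    }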

Outline
Terminology
Dependencies
Atomicity
Mutual exclusion and critical sections
Thread safety

Thread safety
A common term to encounter in programming is thread safety.
Typically used to describe how a programmer can expect a subroutine, object, or data structure to behave in a concurrent context.
–A thread safe implementation will behave as expected in the presence of multiple threads of execution.
–A thread unsafe implementation may behave incorrectly and unpredictably.

Reentrant code
A related term is reentrant.
A reentrant routine (or region of code) may be entered by one thread of execution even while another thread is already executing it.
Non-reentrant code typically relies on state, such as global variables, that is not intended to be shared by multiple threads of execution.
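
A classic illustration from the C standard library (not from the slides; it assumes a POSIX C library): strtok keeps its current position in a hidden static variable, so it is neither reentrant nor thread safe, while strtok_r takes that state as an explicit argument supplied by the caller and can be used for interleaved tokenizations.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf1[] = "a,b,c";
        char buf2[] = "x:y:z";
        char *save1 = NULL, *save2 = NULL;

        /* strtok_r is reentrant: each in-progress tokenization keeps its own
           state (save1, save2), so the two loops can be interleaved safely.
           With strtok, the single hidden static position would be clobbered. */
        char *t1 = strtok_r(buf1, ",", &save1);
        char *t2 = strtok_r(buf2, ":", &save2);
        while (t1 || t2) {
            if (t1) { printf("list 1: %s\n", t1); t1 = strtok_r(NULL, ",", &save1); }
            if (t2) { printf("list 2: %s\n", t2); t2 = strtok_r(NULL, ":", &save2); }
        }
        return 0;
    }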