Chapter Eleven: Concurrent Programming
Programming Languages – Principles and Paradigms, by Allen Tucker and Robert Noonan

Background: Parallelism can provide a distinct way of conceptualizing problems. Modern processors can contain a number of functional units that cannot be fully utilized without considering explicit parallelism. Operating systems have long embraced the concept of concurrent programming (more about this later). Applications are finding more and more uses for concurrent/parallel operation (examples?).

Concepts: A process is a program in execution. A process has state, including resources like memory and open files, as well as the contents of its program counter. This is the process’s execution context. A concurrent program has more than one execution context. This type of program is also referred to as a multi-threaded program. A parallel program is a concurrent program with more than one thread executing simultaneously.

Concepts: A distributed program is a system of programs designed to execute simultaneously on a network of computers. Because the individual processes may execute on different computers, they do not share memory (as threads do). Distributed programs must communicate using some form of message passing. Multi-threaded programs most often communicate using shared (i.e., non-local) variables.

Concurrency in Operating Systems: The earliest operating systems executed programs in batch mode, running each program sequentially from start to finish. Later OSs were multiprogrammed, loading and executing several programs in an interleaved fashion; a scheduler forced execution to switch among the active processes (context switching). Modern OSs use interactive time-sharing, quickly switching among processes to give the appearance of simultaneous execution of many programs (even on a single CPU).

Concurrency in Programs: Concurrent execution of a program can use multiple processors within a single computer or interleave threads on a single processor using time slicing. In Java and Ada, separate threads are applied to functions or methods. Starting a thread is different from the normal procedure for calling a function: the invoking thread does not wait for the new thread to complete, and when the new thread does complete, control does not return to the invoking thread.
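A minimal Java sketch of this difference (class name and message strings are illustrative, not from the text): start() returns immediately, and the invoking thread waits only if it explicitly asks to with join().

public class ForkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() ->
            System.out.println("worker: running in its own thread"));
        worker.start();   // returns at once; main does not wait
        System.out.println("main: continues immediately");
        worker.join();    // waiting happens only because we ask for it
        System.out.println("main: worker has finished");
    }
}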

States of a Thread (Figure 11.1):
- Created: born but not yet ready to run
- Ready: ready to run but needs a processor
- Running: actually executing on a processor
- Blocked: waiting to gain access to some resource
- Finished: execution complete
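These states map roughly onto Java's own Thread.State enum (NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED); Java splits the waiting state into several cases. A small probe, as a sketch:

public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { /* do nothing and finish */ });
        System.out.println(t.getState());  // NEW: created, not yet ready to run
        t.start();                         // now ready/running
        t.join();                          // wait until it finishes
        System.out.println(t.getState());  // TERMINATED: finished
    }
}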

Communication: All concurrent programs involve inter-thread communication or interaction. Threads compete for exclusive access to shared resources (such as?), and threads communicate to share data. A thread can communicate using:
- Non-local shared variables (used by Java).
- Message passing (used by Ada).
- Access parameters, i.e., pointers (used by Ada along with message passing).
Note that Ada uses a mechanism called a rendezvous to provide points of synchronization among threads.

Sharing Access: The fundamental problem in shared access is called a race condition. Race conditions can occur when operations on shared variables are not atomic: when two or more threads interleave a sequence like "load value, change value, store value," the results become unpredictable. Some computers provide atomic test-and-set instructions to help avoid such problems (e.g., the DEC Alpha). Programmatically, code that accesses a shared variable is termed a critical section.
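A sketch of the race in Java (names and loop counts are illustrative): counter++ compiles to a non-atomic load-add-store, so two threads performing 100,000 increments each usually produce a total well below 200,000.

public class RaceDemo {
    static int counter = 0;  // shared variable; counter++ is not atomic

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter++;  // unprotected critical section
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);  // typically less than 200000: a race
    }
}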

Critical Sections: For a thread to safely execute a critical section, some form of locking mechanism must be implemented. The locking mechanism ensures that only one thread at a time executes within the critical section, making it effectively atomic. When implementing locks you must beware of possible deadlock conditions (how can this happen?). It is also possible for locks to be unfair, giving preference to some threads while starving others.
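One way to implement such a lock in Java is the built-in monitor lock via synchronized: only one thread at a time can execute a synchronized method of the same object. A sketch (class and method names are illustrative):

public class LockedCounter {
    private int counter = 0;

    public synchronized void increment() {  // lock held for the critical section
        counter++;                           // now an effectively atomic load-add-store
    }

    public synchronized int value() {
        return counter;
    }
}

With this class, the race in the previous sketch disappears: the lost updates were exactly the interleavings the lock now forbids.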

Semaphores: Semaphores were invented by Dijkstra in the 1960s. A semaphore consists of a shared integer variable and an associated queue to hold blocked threads. Two atomic operations are defined on a semaphore s:
- P(s): if s > 0 then set s = s - 1; otherwise the calling thread blocks.
- V(s): if a thread is blocked on s, wake it; otherwise set s = s + 1.
If a semaphore only takes on the values 0 and 1, it is called a binary semaphore. If it can take on any non-negative integer value, it is called a counting semaphore.
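Java's java.util.concurrent.Semaphore provides essentially these semantics, with acquire() playing the role of P and release() the role of V. A binary semaphore used for mutual exclusion, as a sketch (class and method names are illustrative):

import java.util.concurrent.Semaphore;

public class SemaphoreMutex {
    private final Semaphore mutex = new Semaphore(1);  // binary: values 0 and 1

    public void criticalSection() throws InterruptedException {
        mutex.acquire();      // P(mutex): decrement, or block if already 0
        try {
            // ... code that accesses shared variables ...
        } finally {
            mutex.release();  // V(mutex): wake a blocked thread, or increment
        }
    }
}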

A word about Edsger Dijkstra: Professor Edsger Wybe Dijkstra, a noted pioneer of the science and industry of computing, died after a long struggle with cancer on 6 August 2002 at his home in Nuenen, the Netherlands. Dijkstra was the 1972 recipient of the ACM Turing Award, often viewed as the Nobel Prize of computing. He was particularly acerbic about the many sins he considered encouraged by the BASIC programming language, which he said irreparably harmed young programmers, and wrote a famous paper, "Go To Statement Considered Harmful." The operations P and V get their names from the Dutch proberen (to test) and verhogen (to increment).

Simple Producer-Consumer Cooperation Using Semaphores (Figure 11.2): With only one producer and one consumer, we can use a pair of binary semaphores to create a critical section around the shared variable buffer.
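A sketch of this arrangement in Java (names are illustrative; the figure itself is not reproduced here): empty starts at 1 because the buffer slot begins free, and full starts at 0 because there is nothing yet to consume, so the two semaphores force producer and consumer to alternate.

import java.util.concurrent.Semaphore;

public class OneSlotBuffer {
    private int buffer;                                // the shared variable
    private final Semaphore empty = new Semaphore(1);  // 1: the slot is free
    private final Semaphore full  = new Semaphore(0);  // 0: nothing to consume

    public void produce(int item) throws InterruptedException {
        empty.acquire();   // P(empty): wait for the slot to be free
        buffer = item;
        full.release();    // V(full): an item is now available
    }

    public int consume() throws InterruptedException {
        full.acquire();    // P(full): wait for an item
        int item = buffer;
        empty.release();   // V(empty): the slot is free again
        return item;
    }
}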

Multiple Producers-Consumers (Figure 11.3): With multiple producers and consumers, we use counting semaphores to keep track of a circular buffer pool. As before, we use binary semaphores to create a critical section protecting the buffer.
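A sketch of the same scheme in Java (the buffer size and all names are illustrative): the counting semaphores empty and full track free and filled slots in the circular pool, while the binary semaphore mutex guards the indices so any number of producers and consumers can run safely.

import java.util.concurrent.Semaphore;

public class BoundedBuffer {
    private static final int SIZE = 8;
    private final int[] pool = new int[SIZE];             // circular buffer pool
    private int in = 0, out = 0;                          // next write/read slots

    private final Semaphore empty = new Semaphore(SIZE);  // counting: free slots
    private final Semaphore full  = new Semaphore(0);     // counting: filled slots
    private final Semaphore mutex = new Semaphore(1);     // binary: guards in/out

    public void produce(int item) throws InterruptedException {
        empty.acquire();   // wait for a free slot
        mutex.acquire();   // enter the critical section
        pool[in] = item;
        in = (in + 1) % SIZE;
        mutex.release();   // leave the critical section
        full.release();    // one more filled slot
    }

    public int consume() throws InterruptedException {
        full.acquire();    // wait for a filled slot
        mutex.acquire();
        int item = pool[out];
        out = (out + 1) % SIZE;
        mutex.release();
        empty.release();   // one more free slot
        return item;
    }
}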

Next time… More Concurrent Programming