Background and Motivation


Background and Motivation
- A PROCESS or THREAD is a potentially active execution context
- The classic von Neumann (stored-program) model of computing has a single thread of control
- Parallel programs have more than one
- A process can be thought of as an abstraction of a physical PROCESSOR
Copyright © 2005 Elsevier

Background and Motivation
- Processes/threads can come from
  - multiple CPUs
  - kernel-level multiplexing of a single physical machine
  - language- or library-level multiplexing of a kernel-level abstraction
- They can run
  - in true parallel
  - unpredictably interleaved
  - run-until-block
- Most work focuses on the first two cases, which are equally difficult to deal with

Background and Motivation
- Two main classes of programming notation
  - synchronized access to shared memory
  - message passing between processes that don't share memory
- Both approaches can be implemented on hardware designed for the other, though shared memory on message-passing hardware tends to be slow

Background and Motivation
- Race conditions: a race condition occurs when actions in two processes are not synchronized and program behavior depends on the order in which the actions happen
- Race conditions are not all bad; sometimes any of the possible program outcomes is acceptable (e.g., workers taking things off a task queue)

Background and Motivation
- Race conditions (which we want to avoid): suppose processors A and B share memory, and both try to increment variable X at more or less the same time
- Very few processors support arithmetic operations on memory, so each processor executes
      LOAD X
      INC
      STORE X
- Suppose X is initialized to 0. If both processors execute these instructions simultaneously, what are the possible outcomes?
  - X could go up by one or by two

Background and Motivation
- SYNCHRONIZATION is the act of ensuring that events in different processes happen in a desired order
- Synchronization can be used to eliminate race conditions
- In our example we need to synchronize the increment operations to enforce MUTUAL EXCLUSION on access to X
- Most synchronization can be regarded as either
  - mutual exclusion: making sure that only one process is executing a CRITICAL SECTION (touching a variable, for example) at a time, or
  - CONDITION SYNCHRONIZATION: making sure that a given process does not proceed until some condition holds (e.g., that a variable contains a given value)

Background and Motivation
- We do NOT in general want to over-synchronize
  - That eliminates parallelism, which we generally want to encourage for performance
- Basically, we want to eliminate the "bad" race conditions, i.e., the ones that cause the program to give incorrect results

Concurrent Programming Fundamentals
- SCHEDULERS give us the ability to "put a thread/process to sleep" and run something else on its process/processor
  - start with coroutines
  - make uniprocessor run-until-block threads
  - add preemption
  - add multiple processors

Shared Memory
- Condition synchronization with atomic reads and writes is easy: you just cast each condition in the form "location X contains value Y" and keep reading X in a loop until you see what you want
- Mutual exclusion is harder
  - Much early research was devoted to figuring out how to build it from simple atomic reads and writes
  - Dekker is generally credited with finding the first correct solution for two processes in the early 1960s
  - Dijkstra published a version that works for N processes in 1965
  - Peterson published a much simpler two-process solution in 1981

Message Passing
- Sending
  - no-wait send
    - maximal concurrency
    - buffering and flow control are a problem
    - error reporting is a BIG problem
  - synchronization send
    - fixes the above problems
    - requires high-level acks (inefficient on many substrates)
  - remote invocation
    - matches common algorithms
    - no more expensive than a synchronization send on most substrates (cheaper than a pair of synchronization sends)
  - broadcast and multicast

Message Passing
- Connecting
  - static and dynamic naming
  - naming processes, operations, or channel abstractions
- Receiving
  - explicit receipt: some running process says "receive"
  - implicit receipt: an incoming message causes a new process to be created
- RPC = remote invocation send and implicit receipt