UNIVERSITY of WISCONSIN-MADISON Computer Sciences Department


UNIVERSITY of WISCONSIN-MADISON Computer Sciences Department
CS 537 Introduction to Operating Systems
Andrea C. Arpaci-Dusseau and Remzi H. Arpaci-Dusseau

Implementing Locks
Questions answered in this lecture: Why use higher-level synchronization primitives? What is a lock? How can locks be implemented? When to use spin-waiting vs. blocking?

Synchronization Layering
Build higher-level synchronization primitives in the OS: operations that ensure correct ordering of instructions across threads.
Motivation: Build them once and get them right; don't make users write entry and exit code.
Layers, from highest to lowest:
- Monitors
- Semaphores, Locks, Condition Variables
- Loads, Stores, Test&Set, Disable Interrupts

Locks
Goal: Provide mutual exclusion (mutex).
Three common operations:
Allocate and initialize:
    pthread_mutex_t mylock = PTHREAD_MUTEX_INITIALIZER;
Acquire: acquire exclusive access to the lock; wait if the lock is not available.
    pthread_mutex_lock(&mylock);
Release: release exclusive access to the lock.
    pthread_mutex_unlock(&mylock);

Lock Example
After the lock has been allocated and initialized:

void deposit(int amount) {
    pthread_mutex_lock(&mylock);
    balance += amount;
    pthread_mutex_unlock(&mylock);
}

Allocate one lock for each bank account:

void deposit(int accountid, int amount) {
    pthread_mutex_lock(&locks[accountid]);
    balance[accountid] += amount;
    pthread_mutex_unlock(&locks[accountid]);
}

Implementing Locks: Version #1
Build locks using atomic loads and stores.

typedef struct {
    bool lock[2];   /* both initialized to false */
    int turn;       /* initialized to 0 */
} lockT;

void acquire(lockT *l) {
    l->lock[tid] = true;
    l->turn = 1 - tid;
    while (l->lock[1-tid] && l->turn == 1-tid)
        /* wait */;
}

void release(lockT *l) {
    l->lock[tid] = false;
}

Disadvantages??

Implementing Locks: Version #2
Turn off interrupts for critical sections.
Prevents the dispatcher from running another thread, so the code executes atomically.

void acquire(lockT *l) { disableInterrupts(); }
void release(lockT *l) { enableInterrupts(); }

Disadvantages??

Implementing Locks: Version #3
Leverage an atomic "test" and "set" of the lock.
Hardware instruction: TestAndSet addr val (TAS)
Returns the previous value of addr and sets the value at addr to val.
Example: C = 10; old = TAS(&C, 15)
    old == ?? C == ??

typedef bool lockT;

void acquire(lockT *l) {
    while (TAS(l, true))
        /* wait */;
}

void release(lockT *l) {
    *l = false;
}

Disadvantages??

Lock Implementation #4: Block when Waiting

typedef struct {
    bool lock;      /* initialized to false */
    bool guard;     /* initialized to false */
    queue q;        /* initialized with queue_alloc() */
} LockT;

void acquire(LockT *l) {
    while (TAS(&l->guard, true))
        ;
    if (l->lock) {
        qadd(l->q, tid);
        l->guard = false;
        call dispatcher;
    } else {
        l->lock = true;
        l->guard = false;
    }
}

void release(LockT *l) {
    if (qempty(l->q))
        l->lock = false;
    else
        WakeFirstProcess(l->q);
}

How to Stop Spin-Waiting?
Option 1: Add sleep(time) inside while (TAS(&guard, 1)); Problems?
Option 2: Add yield() instead.
Option 3: Don't let the thread give up the CPU to begin with. How? Why is this acceptable here?

Lock Implementation #5: Final Optimization

void acquire(LockT *l) {
    ??
    while (TAS(&l->guard, true))
        ;
    if (l->lock) {
        qadd(l->q, tid);
        l->guard = false;
        call dispatcher;
    } else {
        l->lock = true;
        l->guard = false;
    }
}

void release(LockT *l) {
    ??
    while (TAS(&l->guard, true))
        ;
    if (qempty(l->q))
        l->lock = false;
    else
        WakeFirstProcess(l->q);
    l->guard = false;
}

Spin-Waiting vs. Blocking
Each approach is better under different circumstances.
Uniprocessor:
  Waiting process is scheduled --> process holding the lock isn't.
  Waiting process should relinquish the processor.
  Associate a queue of waiters with each lock.
Multiprocessor:
  Waiting process is scheduled --> process holding the lock might be too.
  Whether to spin or block depends on how long, t, before the lock is released:
    Lock released quickly --> spin-wait.
    Lock released slowly --> block.
  "Quick" and "slow" are relative to the context-switch cost, C.

When to Spin-Wait? When to Block?
If we knew how long, t, before the lock is released, we could determine the optimal behavior:
How much CPU time is wasted when spin-waiting? How much is wasted when blocking?
What is the best action when t < C? When t > C?
Problem: Requires knowledge of the future.

Two-Phase Waiting
Theory: Bound the worst-case performance.
When does the worst possible performance occur?
Spin-wait for C, then block --> within a factor of 2 of optimal.
Two cases:
  t < C: optimal spin-waits for t; we spin-wait for t too.
  t > C: optimal blocks immediately (cost C); we spin for C, then block (cost 2C).
This is an example of competitive analysis.