
1 Background
Concurrent access to shared data can lead to inconsistencies.
Maintaining data consistency among cooperating processes is critical.
What is wrong with the producer and consumer code shown on the slide?

2 Race Condition
count++ and count-- may not each compile to one machine instruction:
– register1 = count; register1 = register1 + 1; count = register1
– register2 = count; register2 = register2 – 1; count = register2
Suppose P = producer, C = consumer, and count = 5:
– P: register1 = count             // 5
– P: register1 = register1 + 1     // 6
– C: register2 = count             // 5
– C: register2 = register2 – 1     // 4
– P: count = register1             // 6
– C: count = register2             // 4
Questions:
– What is the final value of count?
– What other outcomes are possible?
– What should the correct value be?
A race condition is a condition where the outcome depends on the order of execution. A runnable sketch of this race appears below.
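The following is a minimal Java sketch, not from the slides, that makes the race above observable; the class name RaceDemo and the loop bounds are illustrative.

// Illustrative only: a shared counter updated by two threads without synchronization.
public class RaceDemo {
    static int count = 0;                       // shared data

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) count++;   // not atomic
        });
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) count--;   // not atomic
        });
        producer.start(); consumer.start();
        producer.join();  consumer.join();
        System.out.println("count = " + count); // expected 0, but often nonzero
    }
}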

3 Critical Sections
A critical section is a block of instructions that must be protected from concurrent execution.
We cannot assume:
– Process execution speed (although we can assume all processes execute at some nonzero speed)
– Which process executes next
– Process execution order
Race condition: concurrent access to shared data where the output depends on the order of execution.

4 Critical-Section Solutions
We need a protocol that guarantees the following (no assumptions can be made regarding process speed or scheduling order):
1. Mutual Exclusion – a mechanism limits the number of processes (normally one) that can execute in a critical section at any time.
2. Progress – if no process is executing in its critical section, processes waiting to enter cannot wait indefinitely.
3. Bounded Waiting – there is a bound on the number of times other processes can enter their critical sections ahead of a process that has requested entry.

5 Peterson's Two-Process Solution
Assume atomic load and store instructions.
The processes share:
– int turn;
– boolean flag[2];
turn indicates whose turn it is to enter the critical section.
flag indicates whether a process is ready to enter: flag[i] = true implies that process Pi is ready.
Does Peterson's solution satisfy mutual exclusion, progress, and bounded waiting?
Spinning or "busy waiting" means looping instead of blocking. A sketch of the algorithm appears below.
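A minimal Java sketch of Peterson's two-process algorithm, not from the slides; the class name is illustrative, and the fields are declared volatile so the textbook reasoning holds under the Java memory model.

// Illustrative Peterson's algorithm for two threads (thread 0 and thread 1).
class Peterson {
    private volatile boolean flag0, flag1;  // flagX: thread X wants to enter
    private volatile int turn;              // which thread must yield

    public void enter(int i) {
        if (i == 0) { flag0 = true; turn = 1; while (flag1 && turn == 1) ; }  // busy wait
        else        { flag1 = true; turn = 0; while (flag0 && turn == 0) ; }  // busy wait
    }

    public void exit(int i) {
        if (i == 0) flag0 = false; else flag1 = false;   // leave the critical section
    }
}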

6 Hardware Solutions
Hardware atomic instructions:
– test-and-set
– swap
– increment and test if zero
Disable interrupts:
– Could involve loss of data
– Won't work on multiprocessors without inefficient message passing
Atomic operation: one that cannot be interrupted; a pending interrupt will not occur until the instruction completes.

7 Simulate a Hardware Solution
public class HardwareData {
    private boolean value = false;
    public HardwareData(boolean value) { this.value = value; }
    public synchronized boolean lock() { return value; }               // read the flag
    public synchronized void release(boolean value) { this.value = value; }
    public synchronized boolean getAndSet(boolean value) {             // simulates test-and-set
        boolean old = this.value;
        this.value = value;
        return old;
    }
    public synchronized void swap(HardwareData other) {                // simulates atomic swap
        boolean old = value;
        value = other.value;
        other.value = old;
    }
}

Usage:
HardwareData hdware = new HardwareData(false);    // shared by all threads
HardwareData hdware2 = new HardwareData(true);    // per-thread key for swap

Either:
while (hdware.getAndSet(true))
    Thread.yield();
Or:
do {
    hdware.swap(hdware2);
} while (hdware2.lock());          // wait - spin

criticalSection();
hdware.release(false);
someOtherCode();

8 Operating System Synchronization
Uniprocessors – disable interrupts:
– Currently running code executes without preemption
– The executing code must be extremely quick
– Not scalable to multiprocessor systems, where each processor has its own interrupt vector
Multiprocessors – atomic hardware instructions:
– Spin locks are used; critical sections must be extremely quick
– Locks that require significant processing use other mechanisms: preemptible semaphores

9 Semaphores
A synchronization tool that blocks waiting processes rather than spinning.
A semaphore is an abstract data type that:
– Contains an integer variable
– Has an atomic acquire method (P – proberen)
– Has an atomic release method (V – verhogen)
– Includes an implied queue (or some randomly accessed data structure) of blocked processes

acquire() {
    value--;
    if (value < 0) {
        add P to list
        block
    }
}

release() {
    value++;
    if (value <= 0) {
        remove P from list
        wakeup(P)
    }
}

10 Semaphore for General Synchronization
Semaphore sem = new Semaphore(n);
sem.acquire();
// critical section code
sem.release();

Counting semaphore:
– The integer value can have any range
– Can count available resources
Binary semaphore:
– The integer value is 0 or 1
– Also known as a mutex lock
Semaphores are error prone:
– acquire instead of release (or vice versa)
– Forgetting to acquire or release
Advantages and disadvantages?

11 Java Semaphores (Java 1.5)
import java.util.concurrent.Semaphore;

public class Worker implements Runnable {
    private Semaphore sem;
    public Worker(Semaphore sem) { this.sem = sem; }
    public void run() {
        while (true) {
            try { sem.acquire(); } catch (InterruptedException e) { return; }
            doSomething();      // critical section
            sem.release();
            doMore();
        }
    }
}

public class Factory {
    public static void main(String[] args) {
        Semaphore sem = new Semaphore(1);
        Thread[] bees = new Thread[5];
        for (int i = 0; i < 5; i++) bees[i] = new Thread(new Worker(sem));
        for (int i = 0; i < 5; i++) bees[i].start();
    }
}

12 Semaphore Implementation
A semaphore implementation may need to block while holding a mutex (the monitor lock):
– Issue a wait() call, which releases the mutex and blocks
– Another process must wake up the waiting thread (notify() or notifyAll()); otherwise the thread will never execute
Good design minimizes the time spent in critical sections. A sketch of such an implementation appears below.
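A minimal sketch, not from the slides, of a counting semaphore built on Java's monitor lock with wait() and notify(); the class name SimpleSemaphore is illustrative.

// Illustrative counting semaphore using Java's built-in monitor.
public class SimpleSemaphore {
    private int value;

    public SimpleSemaphore(int initial) { value = initial; }

    public synchronized void acquire() throws InterruptedException {
        while (value == 0)
            wait();           // releases the monitor lock while blocked
        value--;
    }

    public synchronized void release() {
        value++;
        notify();             // wake one waiting thread, if any
    }
}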

13 Deadlock and Starvation
Deadlock – two or more processes wait for a resource request that can never be satisfied.
Let S and Q be two semaphores initialized to 1:

P0:                    P1:
S.acquire();           Q.acquire();
Q.acquire();           S.acquire();
// critical section    // critical section
S.release();           Q.release();
Q.release();           S.release();

Starvation – indefinite blocking (continual preemption), caused by:
– The order of servicing the blocked list (LIFO, for example)
– Process priorities that prevent a process from exiting a semaphore queue

14 The Bounded Buffer Problem
One or more producers add elements to a buffer.
One or more consumers extract produced elements from the buffer for service.
There are N buffers (hence a bounded number).
Solution with semaphores:
– Three semaphores: mutex, full, and empty
– Initialize mutex to 1, full to 0, and empty to N
– The full semaphore counts up as items are produced; the empty semaphore counts down

15 The Bounded Buffer Solution
import java.util.Date;

public class BoundedBufferExample {
    public static void main(String[] args) {
        BoundedBuffer buffer = new BoundedBuffer();
        new Producer(buffer).start();
        new Consumer(buffer).start();
    }
}

public class Producer extends Thread {
    private BoundedBuffer buf;
    public Producer(BoundedBuffer buf) { this.buf = buf; }
    public void run() {
        while (true) {
            try {
                sleep(100);
                buf.insert(new Date());
            } catch (InterruptedException e) { return; }
        }
    }
}

public class Consumer extends Thread {
    private BoundedBuffer buf;
    public Consumer(BoundedBuffer buf) { this.buf = buf; }
    public void run() {
        while (true) {
            try {
                sleep(100);
                System.out.println((Date) buf.remove());
            } catch (InterruptedException e) { return; }
        }
    }
}

16 Bounded Buffer Semaphore Solution
import java.util.concurrent.Semaphore;

public class BoundedBuffer {
    private static final int SIZE = 5;
    private Object[] buffer;
    private int in, out;
    private Semaphore mutex, empty, full;

    public BoundedBuffer() {
        in = out = 0;
        buffer = new Object[SIZE];
        mutex = new Semaphore(1);
        empty = new Semaphore(SIZE);
        full = new Semaphore(0);
    }

    public void insert(Object item) throws InterruptedException {
        empty.acquire();
        mutex.acquire();
        buffer[in] = item;
        in = (in + 1) % SIZE;
        mutex.release();
        full.release();
    }

    public Object remove() throws InterruptedException {
        full.acquire();
        mutex.acquire();
        Object item = buffer[out];
        out = (out + 1) % SIZE;
        mutex.release();
        empty.release();
        return item;
    }
}
Demonstrates binary and counting semaphores.

17 The Readers-Writers Problem
Data is shared among concurrent processes:
– Readers: only read the data set; they perform no updates
– Writers: may both read and write
Problem:
– Multiple readers can read concurrently
– Only one writer at a time can access the shared data
Shared data:
– The data set
– Semaphore mutex initialized to 1
– Semaphore db initialized to 1 (acquired by the first reader)
– Integer readerCount initialized to 0, counting upwards
Note: the best solution disallows more readers while a writer waits; otherwise, starvation is possible.

18 Readers-Writers with Semaphores
import java.util.concurrent.Semaphore;

class DataBase {
    private int readers;
    private Semaphore mutex, db;

    public DataBase() {
        readers = 0;
        mutex = new Semaphore(1);
        db = new Semaphore(1);
    }

    public void acquireRead() throws InterruptedException {    // first reader acquires db
        mutex.acquire();
        if (++readers == 1) db.acquire();
        mutex.release();
    }

    public void releaseRead() throws InterruptedException {    // last reader releases db
        mutex.acquire();
        if (--readers == 0) db.release();
        mutex.release();
    }

    public void acquireWrite() throws InterruptedException { db.acquire(); }
    public void releaseWrite() { db.release(); }
}
How would we disallow more readers after a write request?

19 Reader-Writer User Classes
class Reader extends Thread {
    private DataBase db;
    public Reader(DataBase db) { this.db = db; }
    public void run() {
        while (true) {
            try {
                sleep(500);
                db.acquireRead();
                doRead();
                db.releaseRead();
            } catch (InterruptedException e) { return; }
        }
    }
}

class Writer extends Thread {
    private DataBase db;
    public Writer(DataBase db) { this.db = db; }
    public void run() {
        while (true) {
            try {
                sleep(500);
                db.acquireWrite();
                doWrite();
                db.releaseWrite();
            } catch (InterruptedException e) { return; }
        }
    }
}

Hints for the lab project:
1. Make an array of DataBase objects
2. Add throws InterruptedException to the methods
3. Randomly choose the sleep length

20 Condition Variables
An object with wait and signal capability: Condition x;
Two operations on a condition:
x.wait()
– Blocks the process invoking this operation
– Expects a subsequent signal call by another process
x.signal()
– Wakes up a process blocked because of wait()
– If no processes are blocked, the signal is ignored
– Which process wakes up? Answer: indeterminate
Note: wait and signal normally execute after some condition occurs.

21 Monitors
A high-level abstraction integrated into the syntax of the language.
Only one process may be active within the monitor at a time.
[Figures: a monitor without condition variables, and a monitor with condition variables]

22 Generic Monitor Syntax
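The original slide showed only an image. What follows is a pseudocode sketch of the usual textbook form of monitor syntax, reconstructed as an assumption rather than taken from the slide.

monitor MonitorName {
    // shared variable declarations
    condition x, y;

    procedure P1(...) { ... }
    procedure P2(...) { ... }
    ...
    procedure Pn(...) { ... }

    initialization code(...) { ... }
}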

23 Java Monitors
Every object has a single monitor lock.
A call to a synchronized method:
– Lock available: acquires the lock
– Lock not available: block and wait in the entry set
Non-synchronized methods ignore the lock.
The lock is released when a synchronized method returns.
The entry-queue algorithm varies (normally FCFS).
Recursive locking occurs if a method holding the lock calls another synchronized method of the same object. This is legal.

24 Java Synchronization
wait(): block the thread and move it to the wait set.
notify():
– An arbitrary thread from the wait set moves to the entry set
– A thread not waiting for that condition reissues a wait()
notifyAll():
– All threads in the wait set move to the entry set
– Threads not waiting for that condition call wait() again
Notes:
– notify() and notifyAll() are ignored if the wait set is empty
– wait() and notify() provide a single condition variable per Java monitor
– Java 1.5 adds additional condition-variable support
– Calls to wait() and notify() are legal only when the lock is owned

25 Block Synchronization
Scope:
– The time between acquire and release
– Synchronizing blocks of code within a method can reduce the scope
How to do it (acquiring an object's lock from outside a synchronized method):
– Instantiate a lock object
– Use the synchronized keyword around a block
– Use wait() and notify() calls as needed
Example:
Object lock = new Object();
synchronized (lock) {
    // some critical code
    lock.wait();
    // more critical code
    lock.notifyAll();
}
Question: What's wrong with synchronized(new Object()) { // code here }?

26 New Java Concurrency Features
Problem in old versions of Java:
– notify(), notifyAll(), and wait() form a single condition variable for an entire object
– This is not sufficient for every application
Java 1.5 solution: explicit condition variables
Lock key = new ReentrantLock();
Condition condVar = key.newCondition();
Once created, a thread can use the await() and signal() methods, as sketched below.
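A minimal sketch, not from the slides, of a Lock/Condition pair guarding a boolean state; the class Gate and its method names are illustrative.

// Illustrative use of java.util.concurrent.locks Lock and Condition.
import java.util.concurrent.locks.*;

public class Gate {
    private final Lock key = new ReentrantLock();
    private final Condition condVar = key.newCondition();
    private boolean open = false;

    public void pass() throws InterruptedException {
        key.lock();
        try {
            while (!open)
                condVar.await();      // releases the lock while waiting
        } finally { key.unlock(); }
    }

    public void openGate() {
        key.lock();
        try {
            open = true;
            condVar.signal();         // wake one waiting thread
        } finally { key.unlock(); }
    }
}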

27 Dining-Philosophers Problem
Shared data:
– Bowl of rice (data set)
– Semaphore chopStick[5], each element initialized to 1
Philosopher i loop:
while (true) {
    sleep((int)(Math.random() * TIME));
    chopStick[i].acquire();
    chopStick[(i + 1) % 5].acquire();
    eat();
    chopStick[i].release();
    chopStick[(i + 1) % 5].release();
    think();
}

28 Dining Philosophers (Condition Variables)
import java.util.concurrent.locks.*;

class DiningPhilosophers {
    enum State { THINK, HUNGRY, EAT };
    private Lock lock = new ReentrantLock();
    private Condition[] self = new Condition[5];
    private State[] state = new State[5];

    public DiningPhilosophers() {
        for (int i = 0; i < 5; i++) {
            state[i] = State.THINK;
            self[i] = lock.newCondition();
        }
    }

    public void takeForks(int i) throws InterruptedException {
        lock.lock();
        try {
            state[i] = State.HUNGRY;
            test(i);
            while (state[i] != State.EAT)
                self[i].await();
        } finally { lock.unlock(); }
    }

    public void returnFork(int i) {
        lock.lock();
        try {
            state[i] = State.THINK;
            test((i + 1) % 5);
            test((i + 4) % 5);
        } finally { lock.unlock(); }
    }

    private void test(int i) {
        if (state[(i + 4) % 5] != State.EAT && state[i] == State.HUNGRY
                && state[(i + 1) % 5] != State.EAT) {
            state[i] = State.EAT;
            self[i].signal();
        }
    }
}

29 Dining Philosophers with Java Monitor
class Chopsticks {
    private boolean[] stick = new boolean[5];

    public Chopsticks() {
        for (int i = 0; i < 5; i++) stick[i] = false;
    }

    public synchronized void pickUp(int i) throws InterruptedException {
        while (stick[i]) wait();
        stick[i] = true;
        while (stick[(i + 1) % 5]) wait();
        stick[(i + 1) % 5] = true;
    }

    public synchronized void putDown(int i) {
        stick[i] = false;
        stick[(i + 1) % 5] = false;
        notifyAll();
    }
}

// In each philosopher thread:
public void run() {
    String me = Thread.currentThread().getName();
    int stick = Integer.parseInt(me);
    while (true) {
        try {
            chopStick.pickUp(stick);
            eat();
            chopStick.putDown(stick);
            think();
        } catch (InterruptedException e) { return; }
    }
}
Note: All philosopher threads must share the Chopsticks object.

30 Bounded Buffer - Java
Synchronized insert() and remove() methods:

public synchronized void insert(Object item) {
    while (count == SIZE)
        Thread.yield();
    buffer[in] = item;
    in = (in + 1) % SIZE;
    ++count;
}

public synchronized Object remove() {
    while (count == 0)
        Thread.yield();
    Object item = buffer[out];
    out = (out + 1) % SIZE;
    --count;
    return item;
}

What bugs are here?

31 Java Bounded Buffer with Monitors
public class BoundedBuffer {
    private int count, in, out;
    private Object[] buf;

    public BoundedBuffer(int size) {
        count = in = out = 0;
        buf = new Object[size];
    }

    public synchronized void insert(Object item) throws InterruptedException {
        while (count == buf.length) wait();
        buf[in] = item;
        in = (in + 1) % buf.length;
        ++count;
        notifyAll();
    }

    public synchronized Object remove() throws InterruptedException {
        while (count == 0) wait();
        Object item = buf[out];
        out = (out + 1) % buf.length;
        --count;
        notifyAll();
        return item;
    }
}

32 Java Readers-Writers with Monitors
public class Database {
    private int readers;
    private boolean writers;

    public Database() { readers = 0; writers = false; }

    public synchronized void acquireRead() throws InterruptedException {
        while (writers) wait();
        ++readers;
    }

    public synchronized void releaseRead() {
        if (--readers == 0) notify();
    }

    public synchronized void acquireWrite() throws InterruptedException {
        while (readers > 0 || writers) wait();
        writers = true;
    }

    public synchronized void releaseWrite() {
        writers = false;
        notifyAll();
    }
}

33 Solaris Synchronization
A variety of locks are available.
Adaptive mutexes:
– Spin while waiting for the lock if the holding task is executing
– Block if the holding task is blocked
– Note: always block on a single-processor machine, because only one task can actually execute
Condition variables and readers-writers locks:
– Threads block if locks are not available
Turnstiles:
– Threads block on a FCFS queue as locks are awarded

34 Windows XP and Linux
Kernel locks:
– Single processor:
  Windows: interrupt masks allow high-priority interrupts
  Linux: disables interrupts for short critical sections
– Multiprocessor: spin locks
User locks:
– Windows: dispatcher object handles; callback mechanism
– Linux: semaphores and spin locks
Pthreads: OS-independent API
– Standard: mutex locks and condition variables
– Non-universal extensions: reader-writer and spin locks
Note: A Windows dispatcher object is a low-level synchronization mechanism that Windows uses to control user locks, semaphores, callback events, timers, and inter-process communication.

35 Transactions
A transaction is a collection of operations performed as an atomic unit.
Requirements:
– Perform in its entirety, or not at all
– Assure atomicity
– Stable storage: operate through failures (typically RAID – Redundant Array of Inexpensive Disks)
Transactions consist of:
– A series of read and write operations
– A commit operation that completes the transaction
– An abort operation that cancels (rolls back) any changes performed
Stable storage: a group of disks that mirror every operation.

36 Write-Ahead Logging
The log:
– Each record holds the transaction name, item name, and old & new values
– Log records are written to stable storage before the data operations they describe
Operations:
– A <Ti starts> record is written when transaction Ti starts
– A <Ti commits> record is written when Ti commits
Recovery methods undo(Ti) and redo(Ti):
– Restore or set the values of all data affected by Ti
– Must be idempotent (same results if repeated)
The system uses the log to restore state after failures:
– Log contains <Ti starts> without <Ti commits>: undo(Ti)
– Log contains both <Ti starts> and <Ti commits>: redo(Ti)
A sketch of this recovery decision appears below.
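A minimal sketch, not from the slides, of the recovery decision over a log; the record format and helper names are illustrative.

// Illustrative recovery pass: undo uncommitted transactions, redo committed ones.
import java.util.*;

class Recovery {
    static void recover(List<String> log) {            // e.g. "T0 starts", "T0 commits"
        Set<String> started = new LinkedHashSet<>();
        Set<String> committed = new HashSet<>();
        for (String record : log) {
            String tx = record.split(" ")[0];
            if (record.endsWith("starts"))  started.add(tx);
            if (record.endsWith("commits")) committed.add(tx);
        }
        for (String tx : started) {
            if (committed.contains(tx)) redo(tx);       // <T starts> and <T commits>
            else                        undo(tx);       // <T starts> without <T commits>
        }
    }
    static void redo(String tx) { System.out.println("redo(" + tx + ")"); }
    static void undo(String tx) { System.out.println("undo(" + tx + ")"); }
}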

37 Checkpoints
Purpose: shorten long recoveries.
Checkpoint scheme:
– Periodically flush log records to stable storage
– Output a <checkpoint> record to the log
Recovery:
– Only consider the transaction Ti that started before the most recent checkpoint but had not yet committed, and transactions that started after Ti
– All other transactions are already on stable storage

38 Serial Schedules
Transactions require multiple reads and writes.
If T0 and T1 are transactions, then T0,T1 and T1,T0 are serial schedules.
If T0, T1, and T2 are transactions, then T0,T1,T2; T0,T2,T1; T1,T0,T2; T1,T2,T0; T2,T0,T1; and T2,T1,T0 are serial schedules.
A serial schedule is a possible atomic order of execution.
Note: N transactions Ti imply N! serial schedules.
Concurrency-control algorithms ensure a schedule equivalent to some serial schedule.
Naïve approach: a single mutex controlling all transactions. However, this is inefficient because transactions seldom access the same data.

39 Non-Serial Schedules
A non-serial schedule allows reads and writes from one transaction to occur concurrently with those from another.
Two operations conflict if they access the same data item and at least one of them is a write.
If a schedule in which reads/writes overlap is equivalent to some serial schedule, it is conflict serializable.
(The slide shows an example schedule whose operations are conflict serializable.)

40 Pessimistic Locking Protocol
Transactions acquire reader/writer locks in advance. If a lock is already held, the transaction must block.
Two-phase locking protocol:
1. Each transaction issues its lock and unlock requests in two phases
2. Growing phase: a transaction can obtain locks but cannot release any
3. Shrinking phase: a transaction can release locks but cannot obtain any
Problems:
– Deadlock can occur
– Locks typically get held for too long
Note: Reader/writer locks are called shared/exclusive locks in transaction processing. They apply to particular data items. A sketch appears below.
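A minimal sketch, not from the slides, of a transaction obeying two-phase locking; the transfer-style scenario and the use of one ReentrantReadWriteLock per data item are illustrative assumptions.

// Illustrative two-phase locking: every lock is acquired before any lock is released.
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TwoPhaseTx {
    // lockA and lockB guard two hypothetical data items touched by the transaction.
    static void run(ReentrantReadWriteLock lockA, ReentrantReadWriteLock lockB, Runnable work) {
        lockA.writeLock().lock();        // growing phase: obtain locks, release none
        lockB.writeLock().lock();
        try {
            work.run();                  // reads and writes on the locked items
        } finally {
            lockB.writeLock().unlock();  // shrinking phase: release locks, obtain none
            lockA.writeLock().unlock();
        }
    }
}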

41 Optimistic Locking Protocol
Goal: not to prevent conflicts, but to detect them.
Mechanism: numerically timestamp each transaction with an increasing transaction number, which is recorded on items as they are read or written.
A conflict occurs when:
– We read an item that was written by a transaction with a later timestamp
– We are about to write an item that was read or written by a transaction with a later timestamp
Action: roll back, get another timestamp, and try again.
Advantage: minimizes lock time and is deadlock free.
Disadvantage: many rollbacks if there is a lot of contention.

42 Time Stamp Implementation
Each data item has two timestamps: the largest timestamp of a successful write and the largest timestamp of a successful read.
Reading a data item:
– If the Ti timestamp < the item's write timestamp, we cannot read a value written in the future; a rollback is necessary
– If the Ti timestamp > the item's write timestamp, perform the read and update the item's read timestamp
Writing a data item:
– If the Ti timestamp < the item's read timestamp, we cannot overwrite an item read in the future; a rollback is necessary
– If the Ti timestamp < the item's write timestamp, we cannot overwrite a record written in the future; a rollback is necessary
– Otherwise, the write succeeds
Implementation issues: timestamp assignment must be atomic, and each read and write must be atomic. A sketch of these rules appears below.
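A minimal sketch, not from the slides, of the per-item timestamp checks; the class and method names are illustrative, and a false return means the transaction must roll back.

// Illustrative timestamp-ordering checks for a single data item.
class TimestampedItem {
    private long readTS, writeTS;   // largest timestamps of successful read and write

    synchronized boolean read(long txTS) {
        if (txTS < writeTS) return false;                   // written in the future: roll back
        readTS = Math.max(readTS, txTS);                    // record the read
        return true;
    }

    synchronized boolean write(long txTS) {
        if (txTS < readTS || txTS < writeTS) return false;  // read or written in the future: roll back
        writeTS = txTS;                                     // record the write
        return true;
    }
}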

43 Schedule Possible Under Timestamp Protocol

