CENG 334 – Operating Systems 03 - Threads & Synchronization Asst. Prof. Yusuf Sahillioğlu Computer Eng. Dept., Turkey.

1 CENG 334 – Operating Systems 03 - Threads & Synchronization Asst. Prof. Yusuf Sahillioğlu Computer Eng. Dept., Turkey

2 Threads 2 / 102 Threads. Parallel work within the same process for efficiency. Several activities going on as part of the same process. Threads share the process's memory and other resources (each thread has its own registers and stack). All about data synchronization:

3 Threads 3 / 102 Processes vs. Threads. One thread listens for connections; others handle page requests. One thread handles the GUI; another does the computations. One thread paints the left part of the screen, another the right part...

4 Thread State 4 / 102 State shared by all threads in a process: memory content (global variables, heap, code, etc.), I/O (files, network connections, etc.). A change in a global variable is seen by all other threads (unlike in processes). State private to each thread: kept in the TCB (Thread Control Block). CPU registers, program counter. Stack (what functions it is calling, parameters, local variables, return addresses). Pointer to the enclosing process (PCB).

5 Thread Behavior 5 / 102 A process has a single thread of control: if it blocks on something, nothing else can be done.
Single-threaded: main() { computePI(); //never finishes printf("hi"); //never reached }
Multi-threaded: main() { createThread( computePI() ); //never finishes createThread( printf("hi") ); //reached }
main() { createThread( scanf() ); //doesn't finish 'till the user enters input (not in CPU) createThread( autoSaveDoc() ); //reached while waiting on I/O }

6 Thread Behavior 6 / 102 Execution flow:

7 Threads on a Single CPU 7 / 102 Still possible. Multitasking idea: share one CPU among many processes (context switch). Multithreading idea: share the same process among many threads (thread switch). Whenever this process has the opportunity to run on the CPU, the OS can select one of its many threads to run for a while, and so on. One pid, several thread ids. The number of schedulable entities increases.

8 Threads on a Single CPU 8 / 102 If threads are all CPU-bound, e.g., no I/O or pure math, then we do not gain much by multithreading. Luckily this is usually not the case, e.g., 1 thread does the I/O, .. Select your threads carefully: one is I/O-bound, another is CPU-bound, .. With multicores, we still gain big even if threads are all CPU-bound.

9 Multithreading Concept 9 / 102 Multithreading concept: pseudo-parallel runs (pseudo: interleaving switches on 1 CPU). funct1() {.. } funct2() {.. } main() {.. createThread( funct1() ); //thread2.. createThread( funct2() ); //thread3.. createThread( funct1() ); //thread4.. } //main itself is thread1

10 Single- vs. Multi-threaded Processes 10 / 102 Shared and private stuff:

11 Benefits of Threads 11 / 102 Responsiveness: one thread blocks, another runs. One thread may always wait for the user. Resource sharing: very easy sharing (use global variables; unlike msg queues, pipes, shmget). Be careful about data synchronization though. Economy: thread creation is fast. Context switching among threads may be faster, 'cos you do not have to duplicate code and global variables (unlike processes). Scalability: multiprocessors can be utilized better. A process that has created 4 threads can use all 4 cores (a single-threaded process utilizes 1 core).

12 Fun Fact 12 / 102 Why the hell do we call it a thread anyway? The execution flow of a program is not smooth; it looks like a thread. Execution jumps around all over the place (switches) but its integrity is intact.

13 Multithreading Example: WWW 13 / 102 Client (Chrome) requests a page from the server (amazon.com). Server gives the page name to a thread and resumes listening. The thread checks the disk cache in memory; if the page is not there, it does disk I/O; then it sends the page to the client.

14 Threading Support 14 / 102 Thread libraries provide us an API for creating and managing threads: pthreads, java threads, win32 threads. Pthreads. Common in Unix operating systems: Solaris, Mac OS, Linux. Not implemented in the standard C library; link against the library named pthread while compiling: gcc -o thread1 thread1.c -lpthread. Functions in the pthread library actually make Linux system calls, e.g., pthread_create() → clone()

15 Pthreads 15 / 102
int main(..) {.. pthread_create(&tid, …, runner, ..); pthread_join(tid); printf(sum); } //thread1; join = wait for thread2
runner(..) {.. sum = ..; pthread_exit(); } //thread2

16 Single- to Multi-thread Conversion 16 / 102 In a simple world: identify functions as parallel activities and run them as separate threads. In the real world: single-threaded programs use global variables and library functions (malloc). Be careful with them. Global variables are good for easy communication but need special care.

17 Single- to Multi-thread Conversion 17 / 102 Careful with global variable:

18 Single- to Multi-thread Conversion 18 / 102 Careful with global variable:

19 Single- to Multi-thread Conversion 19 / 102 Global, local, and thread-specific variables. Thread-specific: global inside the thread, but not for the whole process, i.e., other threads cannot access it, but all the functions of the thread can (no problem 'cos functions within a thread execute sequentially). Traditional C has no language support for this variable type; the thread API has special functions (e.g., pthread_key_create) to create such variables.

20 Single- to Multi-thread Conversion 20 / 102 Use thread-safe (reentrant, reenterable) library routines. Multiple malloc()s are executed sequentially in single-threaded code. Say one thread is suspended inside malloc(); another thread calls malloc() and re-enters it while the 1st call has not finished. Library functions should be designed to be reentrant = designed to handle a second call to themselves from the same process before the first has finished. To do so, do not use global variables.

21 Synchronization 21 / 102 Synchronize threads/coordinate their activities so that when you access shared data (e.g., global variables) you do not run into trouble. Multiple processes sharing a file or shared memory segment also require synchronization (= critical section handling).

22 Synchronization 22 / 102 The part of the process that is accessing and changing shared data is called its critical section. Change X Change Y Change X Process 1 Code Process 2 Code Process 3 Code Assuming X and Y are shared data.

23 Synchronization 23 / 102 Solution: no 2 processes/threads are in their critical section at the same time, aka Mutual Exclusion (mutex). Must assume processes/threads interleave executions arbitrarily (preemptive scheduling) and at different rates. Scheduling is not under the application's control. We control coordination using data synchronization. We restrict interleaving of executions to ensure consistency. Low-level mechanism to do this: locks. High-level mechanisms: mutexes, semaphores, monitors, condition variables.

24 Synchronization 24 / 102 General way to achieve synchronization:

25 Synchronization 25 / 102 An example: race condition. Critical section respected (good) vs. not respected (bad).

26 Synchronization 26 / 102 Another example: race condition. Assume we had 5 items in the buffer. Then assume the producer just produced a new item, put it into the buffer, and is about to do count++; and assume the consumer just retrieved an item from the buffer and is about to do count--.

27 Synchronization 27 / 102 Another example: race condition. Critical region: where we manipulate count. count++ could be implemented as follows (similarly for count--): register1 = count; //read value register1 += 1; //increase value count = register1; //write back

28 Synchronization 28 / 102 Then, with count = 5 initially, one bad interleaving of PRODUCER (count++) and CONSUMER (count--):
PRODUCER: register1 = count //register1 = 5
PRODUCER: register1 = register1 + 1 //register1 = 6
CONSUMER: register2 = count //register2 = 5
CONSUMER: register2 = register2 – 1 //register2 = 4
PRODUCER: count = register1 //count = 6
CONSUMER: count = register2 //count = 4, but it should be 5!

29 Synchronization 29 / 102 Another example: race condition. 2 threads executing their critical section code. Although 2 customers withdrew 100TL each, the balance is 900TL, not 800TL. Execution sequence as seen by the CPU (Balance = 1000TL initially):
Thread 1: balance = get_balance(account); balance -= amount; //local = 900TL
Thread 2: balance = get_balance(account); balance -= amount; put_balance(account, balance); //Balance = 900TL
Thread 1: put_balance(account, balance); //Balance = 900TL, not 800TL!

30 Synchronization 30 / 102 Solution: mutual exclusion. Only one thread at a time can execute code in their critical section. All other threads are forced to wait on entry. When one thread leaves the critical section, another can enter. Critical Section Thread 1 (modify account balance)

31 Synchronization 31 / 102 Solution: mutual exclusion. Only one thread at a time can execute code in their critical section. All other threads are forced to wait on entry. When one thread leaves the critical section, another can enter. Thread 2 Critical Section Thread 1 (modify account balance) 2nd thread must wait for the critical section to clear

32 Synchronization 32 / 102 Solution: mutual exclusion. Only one thread at a time can execute code in their critical section. All other threads are forced to wait on entry. When one thread leaves the critical section, another can enter. 1st thread leaves the critical section; 2nd thread is free to enter. Thread 2 Critical Section (modify account balance)

33 Synchronization 33 / 102 Solution: mutual exclusion. pthread library provides us mutex variables to control the critical section access. pthread_mutex_lock(&myMutex).. //critical section stuff pthread_mutex_unlock(&myMutex) See this in action..

34 Synchronization 34 / 102 Critical section requirements. Mutual exclusion: at most 1 thread is currently executing in the critical section. Progress: if thread T1 is outside the critical section, then T1 cannot prevent T2 from entering the critical section. No starvation: if T1 is waiting for the critical section, it’ll eventually enter. Assuming threads eventually leave critical sections. Performance: the overhead of entering/exiting critical section is small w.r.t. the work being done within it.

35 Synchronization 35 / 102 Solution: Peterson's solution to mutual exclusion. Programming at the application level (sw solution; no hw or kernel support). Peterson.enter //similar to pthread_mutex_lock(&myMutex).. //critical section stuff Peterson.exit //similar to pthread_mutex_unlock(&myMutex) Works for 2 threads/processes (not more). But first: is this simpler solution OK? Set a global variable lock = 1. A thread that wants to enter the critical section checks lock == 1. If true, enter and do lock--; if false, another thread decremented it, so do not enter.

36 Synchronization 36 / 102 Solution: Peterson's solution to mutual exclusion. Programming at the application level (sw solution; no hw or kernel support). Peterson.enter //similar to pthread_mutex_lock(&myMutex).. //critical section stuff Peterson.exit //similar to pthread_mutex_unlock(&myMutex) Works for 2 threads/processes (not more). Is the simpler solution OK? Set a global variable lock = 1; check lock == 1; if true, enter and do lock--. This solution sucks 'cos lock itself is a shared global variable: the check and the decrement can be interleaved, so both threads may see lock == 1 and enter. Just using a single variable without any other protection is not enough. Back to Peterson's algo..

37 Synchronization 37 / 102 Solution: Peterson’s solution to mutual exclusion. Programming at the application (sw solution; no hw or kernel support). Peterson.enter //similar to pthread_mutex_lock(&myMutex).. //critical section stuff Peterson.exit //similar to pthread_mutex_unlock(&myMutex) Works for 2 threads/processes (not more). Assume that the LOAD and STORE machine instructions are atomic; that is, cannot be interrupted. The two processes share two variables: int turn; boolean flag[2]; The variable turn indicates whose turn it is to enter the critical section. turn = i means process Pi can execute (i=0,1). The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready (wants to enter).

38 Synchronization 38 / 102 Solution: Peterson's solution to mutual exclusion. The variable turn indicates whose turn it is to enter the critical section. turn = i means process Pi can execute (i=0,1). The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready (wants to enter). Algorithm for Pi (the other process is Pj): "I want to enter, but be nice to the other process." Busy wait:

39 Synchronization 39 / 102 Solution: Peterson's solution to mutual exclusion. Shared variables: flag[], turn. i=0, j=1 are local.
PROCESS i (0): do { flag[i] = TRUE; turn = j; while (flag[j] && turn == j); //busy wait ..critical section.. flag[i] = FALSE; ..remainder section.. } while (1)
PROCESS j (1): do { flag[j] = TRUE; turn = i; while (flag[i] && turn == i); //busy wait ..critical section.. flag[j] = FALSE; ..remainder section.. } while (1)

40 Synchronization 40 / 102 Solution: hardware support for mutual exclusion. Kernel code can disable clock interrupts (context/thread switches). disable interrupts (no switch) enable interrupts (schedulable)

41 Synchronization 41 / 102 Solution: hardware support for mutual exclusion. Works for a single CPU. Multi-CPU fails 'cos you're disabling interrupts only for your processor. That does not mean other processors do not get interrupts; each processor has its own interrupt mechanism. Hence another process/thread running on another processor can touch the shared data. Too inefficient to disable interrupts on all available processors.

42 Synchronization 42 / 102 Solution: hardware support for mutual exclusion. Another support mechanism: complex machine instructions from hw that are atomic (not interruptible). Locks (not just simple integers). How to implement acquire/release lock? Use special machine instructions: TestAndSet, Swap.
do { acquire lock ..critical section.. release lock ..remainder section.. } while (TRUE);

43 Synchronization 43 / 102 Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: TestAndSet. TestAndSet is a machine/assembly instruction. You must write the acquire-lock portion (entry section code) of your code in assembly, but here is C code for easy understanding:
--Definition of TestAndSet Instruction--
boolean TestAndSet (boolean *target) { boolean rv = *target; *target = TRUE; return rv; } //atomic (not interruptible)!

44 Synchronization 44 / 102 Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: TestAndSet. We use a shared Boolean variable lock, initialized to false.
do { while ( TestAndSet(&lock) ); //do nothing; busy wait (entry section) ..critical section.. lock = FALSE; //release lock (exit section) ..remainder section.. } while (TRUE);

45 Synchronization 45 / 102 Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: TestAndSet Can be suspended/interrupted b/w TestAndSet & CMP, but not during TestAndSet.

46 Synchronization 46 / 102 Advertisement: Writing assembly in C is a piece of cake.

47 Synchronization 47 / 102 Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: Swap. Swap is a machine/assembly instruction. You must write the acquire-lock portion (entry section code) of your code in assembly, but here is C code for easy understanding:
--Definition of Swap Instruction--
void Swap (boolean* a, boolean* b) { boolean temp = *a; *a = *b; *b = temp; } //atomic (not interruptible)!

48 Synchronization 48 / 102 Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: Swap. We use a shared Boolean variable lock, initialized to false. Each process also has a local Boolean variable key.
do { key = TRUE; while (key == TRUE) Swap(&lock, &key); //entry sect ..critical section.. lock = FALSE; //exit sect ..remainder section.. } while (TRUE);

49 Synchronization 49 / 102 Solution: hardware support for mutual exclusion. A comment on TestAndSet & Swap. Although they both guarantee mutual exclusion, they may make one process (X) wait a lot: process X may be waiting while the other process Y goes into the critical region repeatedly. One toy/bad solution: make the remainder section code so long that the scheduler kicks Y out of the CPU before it reaches back to the entry section.

50 Synchronization 50 / 102 Solution: Semaphores (= shared integer variable). Idea: avoid busy waiting: waste of CPU cycles by waiting in a loop ‘till the lock is available, aka spinlock. Example1: while (flag[i] && turn == i); //from Peterson’s algo. Example2: while (TestAndSet (&lock )); //from TestAndSet algo. How to avoid? If a process P calls wait() on a semaphore with a value of zero, P is added to the semaphore’s queue and then blocked. The state of P is switched to the waiting state, and control is transferred to the CPU scheduler, which selects another process to execute (instead of busy waiting on P). When another process increments the semaphore by calling signal() and there are tasks on the queue, one is taken off of it and resumed. wait() = P() = down(). //modify semaphore s via these functions. signal() = V() = up(). //modify semaphore s via these functions.

51 Synchronization 51 / 102 Solution: Semaphores. wait() = P() = down(). //modify semaphore s via these functions. signal() = V() = up(). //modify semaphore s via these functions. These functions can be implemented in kernel as system calls. Kernel makes sure that wait(s) & signal(s) are atomic. Less complicated entry & exit sections.

52 Synchronization 52 / 102 Solution: Semaphores. Operations (kernel codes). Busy-waiting (bad) vs. efficient blocking (good). More formally, s->value--; s->list.add(this); etc.

53 Synchronization 53 / 102 Solution: Semaphores. Operations. wait(s): if s is positive, do s-- and return; else block/wait ('till somebody wakes you up; then return).

54 Synchronization 54 / 102 Solution: Semaphores. Operations. signal(s): if there are 1+ processes waiting (new s <= 0), wake one of them up and return //wake = change state from waiting to ready; else do s++ and return.

55 Synchronization 55 / 102 Solution: Semaphores. Types. Binary semaphore Integer value can range only between 0 and 1; can be simpler to implement; aka mutex locks. Provides mutual exclusion; can be used for the critical section problem. Counting semaphore Integer value can range over an unrestricted domain. Can be used for other synchronization problems; for example for resource allocation. Example: you have 10 instances of a resource. Init semaphore s to 10 in this case.

56 Synchronization 56 / 102 Solution: Semaphores. Usage. An integer variable s that can be shared by N processes/threads. s can be modified only by atomic system calls: wait() & signal(). s has a queue of waiting processes/threads that might be sleeping on it. Atomic: when process X is executing wait(), Y can execute wait() if X finished executing wait() or X is blocked in wait(). When X is executing signal(), Y can execute signal() if X finished. typedef struct { int value; struct process *list; } semaphore;

57 Synchronization 57 / 102 Solution: Semaphores. Usage. Binary semaphores (mutexes) can be used to solve critical section problems. A semaphore variable (let's say, mutex) can be shared by N processes and initialized to 1. Each process is structured as follows:
do { wait(mutex); ..critical section.. signal(mutex); ..remainder section.. } while (TRUE);

58 Synchronization 58 / 102 Solution: Semaphores. Usage. Semaphore mutex; //initialized to 1, shared by Process 0 and Process 1. wait() {…} and signal() {…} live in the kernel.
Process 0: do { wait(mutex); ..critical section.. signal(mutex); ..remainder section.. } while (TRUE);
Process 1: do { wait(mutex); ..critical section.. signal(mutex); ..remainder section.. } while (TRUE);

59 Synchronization 59 / 102 Solution: Semaphores. Usage. Kernel puts processes/threads waiting on s in a FIFO queue. Why FIFO?

60 Synchronization 60 / 102 Solution: Semaphores. Usage other than critical section. Ensure S1 definitely executes before S2 (just a synch problem). P0 executes S1; P1 executes S2.

61 Synchronization 61 / 102 Solution: Semaphores. Usage other than critical section. Ensure S1 definitely executes before S2 (just a synch problem). Solution via semaphores: Semaphore x = 0; //inited to 0
P0: … S1; signal(x); ….
P1: … wait(x); S2; ….

62 Synchronization 62 / 102 Solution: Semaphores. Usage other than critical section. Resource allocation (just another synch problem). We have N processes that want a resource that has 5 instances. Solution:

63 Synchronization 63 / 102 Solution: Semaphores. Usage other than critical section. Resource allocation (just another synch problem). We have N processes that want a resource R that has 5 instances. Solution: Semaphore rs = 5; Every process that wants to use R will do wait(rs); If some instance is available, rs is positive → no blocking. If all 5 instances are used, the caller blocks 'till an instance is freed. Every process that finishes with R will do signal(rs); A blocked process will change state from waiting to ready.

64 Synchronization 64 / 102 Solution: Semaphores. Usage other than critical section. Enforce the consumer to sleep while there is no item in the buffer (another synch problem). Semaphore Full_Cells = 0; //initialized to 0; wait()/signal() in the kernel.
Producer: do { ..produce item.. ..put item into buffer.. signal(Full_Cells); } while (TRUE);
Consumer: do { wait(Full_Cells); //instead of busy-waiting, go to sleep and give the CPU back to the producer for faster production (efficiency!) ..remove item from buffer.. } while (TRUE);

65 Synchronization 65 / 102 Solution: Semaphores.

66 Synchronization 66 / 102 Solution: Semaphores. Consumer can never cross the producer curve. Difference b/w produced and consumed items can be <= BUFSIZE

67 Synchronization 67 / 102 Problems with semaphores: Deadlock and Starvation. Deadlock. Two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.


70 Synchronization 70 / 102 Problems with semaphores: Deadlock and Starvation. Deadlock. This code may sometimes (not all the time) cause a deadlock:
P0: wait(S); wait(Q); … signal(S); signal(Q);
P1: wait(Q); wait(S); … signal(Q); signal(S);
When does the deadlock occur? How to solve?

71 Synchronization 71 / 102 Problems with semaphores: Deadlock and Starvation. Starvation. Indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended; it will always be sleeping; no service. When does it occur? How to solve? Another problem: a low-priority process may cause a high-priority process to wait.

72 Synchronization 72 / 102 Classic Synchronization Problems to be solved with Semaphores. Bounded-buffer problem. Readers-Writers problem. Dining philosophers problem.

73 Synchronization 73 / 102 Classic Synchronization Problems to be solved with Semaphores. Bounded-buffer problem (aka Producer-Consumer problem). Producer should not produce any item if the buffer is full: it waits on Semaphore empty = N; //counts empty slots. Consumer should not consume any item if the buffer is empty: it waits on Semaphore full = 0; //counts filled slots. Producer and consumer should access the buffer in a mutually exc manner: Semaphore mutex = 1; Types of the 3 semaphores above? (Figure: a buffer with full = 4, empty = 6 between producer and consumer.)

74 Synchronization 74 / 102 Classic Synchronization Problems to be solved with Semaphores. Bounded-buffer problem. Producer should not produce any item if the buffer is full: it waits on Semaphore empty = N. Consumer should not consume any item if the buffer is empty: it waits on Semaphore full = 0. Producer and consumer should access the buffer in a mutually exc manner: Semaphore mutex = 1. Think about the code for this.

75 Synchronization 75 / 102 Classic Synchronization Problems to be solved with Semaphores. Bounded-buffer problem. Producer should not produce any item if the buffer is full: it waits on Semaphore empty = N. Consumer should not consume any item if the buffer is empty: it waits on Semaphore full = 0. Producer and consumer should access the buffer in a mutually exc manner: Semaphore mutex = 1.

76 Synchronization 76 / 102 Classic Synchronization Problems to be solved with Semaphores. Readers-Writers problem. A data set is shared among a number of concurrent processes. Readers: only read the data set; they do not perform any updates. Writers: can both read and write. Problem: allow multiple readers to read at the same time. Only one single writer can access the shared data at the same time (no reader/writer when writer is active).

77 Synchronization 77 / 102 Classic Synchronization Problems to be solved with Semaphores. Readers-Writers problem. A data set is shared among a number of concurrent processes. Readers: only read the data set; they do not perform any updates. Writers: can both read and write. Problem: allow multiple readers to read at the same time. Only one single writer can access the shared data at the same time (no reader/writer when writer is active). Integer readcount initialized to 0. Number of readers reading the data at the moment. Semaphore mutex initialized to 1. Protects the readcount variable (multiple readers may try to modify it). Semaphore wrt initialized to 1. Protects the shared data (either writer or reader(s) should access data at a time).

78 Synchronization 78 / 102 Classic Synchronization Problems to be solved with Semaphores. Readers-Writers problem. A data set is shared among a number of concurrent processes. Readers: only read the data set; they do not perform any updates. Writers: can both read and write. Problem: allow multiple readers to read at the same time. Only a single writer can access the shared data at a time (no reader/writer when a writer is active). Think about the code for this: reader and writer processes running in (pseudo) parallel. Hint: the first and last reader should do something special.

79 Synchronization 79 / 102 Classic Synchronization Problems to be solved with Semaphores. Readers-Writers problem. A data set is shared among a number of concurrent processes. Readers: only read the data set; they do not perform any updates. Writers: can both read and write. //acquire lock to shared data. //release lock of shared data.

80 Synchronization 80 / 102 Classic Synchronization Problems to be solved with Semaphores. Readers-Writers problem. Case1: First reader acquired the lock, reading; what happens if a writer arrives? Case2: First reader acquired the lock, reading; what happens if reader2 arrives? Case3: Writer acquired the lock, writing; what happens if reader1 arrives? Case4: Writer acquired the lock, writing; what happens if reader2 arrives?

81 Synchronization 81 / 102 Classic Synchronization Problems to be solved with Semaphores. Dining philosophers problem. A philosopher (process) needs 2 forks (resources) to eat. While a philosopher is holding a fork, its neighbor cannot have it.


85 Synchronization 85 / 102 Classic Synchronization Problems to be solved with Semaphores. Dining philosophers problem. A philosopher (process) needs 2 forks (resources) to eat. While a philosopher is holding a fork, its neighbors cannot have it.

86 Synchronization 86 / 102 Classic Synchronization Problems to be solved with Semaphores. Dining philosophers problem. A philosopher (process) needs 2 forks (resources) to eat. While a philosopher is holding a fork, its neighbors cannot have it. A philosopher is in 2 states: eating (needs forks) and thinking (does not need forks). We want parallelism, e.g., 4 or 5 (not 1 or 3) can be eating while 2 is eating. We don't want deadlock: waiting for each other indefinitely. We don't want starvation: no philosopher waits forever (starves to death).

87 Synchronization 87 / 102 Classic Synchronization Problems to be solved with Semaphores. Dining philosophers problem. A philosopher (process) needs 2 forks (resources) to eat. While a philosopher is holding a fork, its neighbors cannot have it. A solution that provides concurrency but not deadlock prevention: Semaphore forks[5]; //inited to 1 (assume 5 philosophers at the table). Philosopher i:
do { wait( forks[i] ); wait( forks[(i + 1) % 5] ); ..eat.. signal( forks[i] ); signal( forks[(i + 1) % 5] ); ..think.. } while(TRUE);

88 Synchronization 88 / 102 Classic Synchronization Problems to be solved with Semaphores. Dining philosophers problem. A philosopher (process) needs 2 forks (resources) to eat. While a philosopher is holding a fork, its neighbors cannot have it. A solution that provides concurrency but not deadlock prevention: how is deadlock possible?

89 Synchronization 89 / 102 Classic Synchronization Problems to be solved with Semaphores. Dining philosophers problem. A philosopher (process) needs 2 forks (resources) to eat. While a philosopher is holding a fork, its neighbors cannot have it. A solution that provides concurrency but not deadlock prevention: how is deadlock possible? Deadlock in a circular fashion: 5 gets the left fork, context switch (cs), 4 gets the left fork, cs, .., 1 gets the left fork, cs, 5 now wants the right fork, which is held by 1 forever. Such an unlucky sequence of cs's is not likely but possible. A perfect solution w/o deadlock danger is possible, again with semaphores. Solution #1: put the left fork back if you cannot grab the right. Solution #2: grab both forks at once (atomic).

90 Synchronization 90 / 102 Problems with semaphores. A careless programmer may do:
signal(mutex); .. wait(mutex); //2+ threads in the critical region (unprotected).
wait(mutex); .. wait(mutex); //deadlock (indefinite waiting).
Forgetting the corresponding wait(mutex) or signal(mutex); //unprotected & deadlock.
Need something else, something better, something easier to use: Monitors.

91 Synchronization 91 / 102 Solution: Monitors. Idea: get help not from the OS but from the programming language. High-level abstraction for process/thread synchronization. C does not provide monitors (use semaphores) but Java does. Compiler ensures that the critical regions of your code are protected. You just identify the critical section of the code, put them into a monitor, and compiler puts the protection code. Monitor implementation using semaphores. Compiler writer/language developer has to worry about this stuff, not the casual application programmer.

92 Synchronization 92 / 102 Solution: Monitors. A monitor is a construct in the language, like the class construct: the monitor construct guarantees that only one process may be active within the monitor at a time.
monitor monitor-name { //shared variable declarations procedure P1 (..) {.. } .. procedure Pn (..) {.. } initialization code (..) {.. } }

93 Synchronization 93 / 102 Solution: Monitors. The monitor construct guarantees that only one process may be active within the monitor at a time. This means that if a process is running inside the monitor (= running a procedure, say P1()), then no other process can be active inside the monitor (= can run P1() or any other procedure of the monitor) at the same time. The compiler puts locks/semaphores at the beginning/end of these critical regions (procedures touching the shared variables), so it is no longer the programmer's job to insert them.

94 Synchronization 94 / 102 Solution: Monitors. Schematic view of a monitor. All other processes that want to be active in the monitor (execute a monitor procedure) must wait in the entry queue until the currently active process leaves.

95 Synchronization 95 / 102 Solution: Monitors. Schematic view of a monitor. This monitor solution solves the critical section (mutual exclusion) problem, but not the other synchronization problems such as producer-consumer or dining philosophers.

96 Synchronization 96 / 102 Solution: Monitors. Condition variables solve the remaining synchronization problems. In the previous model there was no way to force a process/thread to wait until some condition holds. Now we can, using condition variables: condition x, y; Two operations on a condition variable: x.wait(): the process that invokes the operation is suspended. x.signal(): resumes one of the processes (if any) that invoked x.wait(). Usually the first one that blocked is woken up (FIFO).
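The wait/signal pattern maps directly onto POSIX condition variables. A minimal sketch, not from the slides: a one-slot mailbox monitor where put() blocks while the slot is full and get() blocks while it is empty; pthread_cond_wait plays the role of x.wait() and pthread_cond_signal of x.signal(). All names here are illustrative:

```c
#include <pthread.h>

/* A tiny monitor with two condition variables. Note that the
   condition state (full) lives in the monitor's own variables;
   nothing is "stored inside" the condition variable itself. */
typedef struct {
    pthread_mutex_t lock;               /* the monitor's entry lock */
    pthread_cond_t  not_full, not_empty;
    int slot;                           /* the single buffered value */
    int full;                           /* 1 if slot holds a value */
} mailbox;

void mailbox_init(mailbox *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_full, NULL);
    pthread_cond_init(&m->not_empty, NULL);
    m->full = 0;
}

void mailbox_put(mailbox *m, int v) {
    pthread_mutex_lock(&m->lock);
    while (m->full)                          /* x.wait() */
        pthread_cond_wait(&m->not_full, &m->lock);
    m->slot = v;
    m->full = 1;
    pthread_cond_signal(&m->not_empty);      /* x.signal() */
    pthread_mutex_unlock(&m->lock);
}

int mailbox_get(mailbox *m) {
    pthread_mutex_lock(&m->lock);
    while (!m->full)
        pthread_cond_wait(&m->not_empty, &m->lock);
    int v = m->slot;
    m->full = 0;
    pthread_cond_signal(&m->not_full);
    pthread_mutex_unlock(&m->lock);
    return v;
}
```

pthread_cond_wait atomically releases the monitor lock while sleeping and reacquires it before returning, which is exactly the behavior a monitor's x.wait() needs.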

97 Synchronization 97 / 102 Solution: Monitors. condition x, y; Contrast: wait(Semaphore s); //you may or may not block, depending on s.value. x.wait(); //you (= the process) definitely block. No integer is attached to x (unlike s.value).

98 Synchronization 98 / 102 Solution: Monitors. Schematic view of a monitor with condition variables. If the currently active process wants to wait (e.g., on an empty buffer), it calls x.wait(), is added to the queue of x, and is no longer active.

99 Synchronization 99 / 102 Solution: Monitors. Schematic view of a monitor with condition variables. A new active process in the monitor (fetched from the entry queue) does x.signal() from a different or the same procedure. The previous process resumes from where it blocked.

100 Synchronization 100 / 102 Solution: Monitors. Schematic view of a monitor with condition variables. Now we may have 2 active processes: the caller of x.signal() and the woken-up process. Solution: put x.signal() as the last statement in the procedure.

101 Synchronization 101 / 102 Solution: Monitors. Schematic view of a monitor with condition variables. Now we may have 2 active processes: the caller of x.signal() and the woken-up process. Alternative solution: call x.wait() right after x.signal() to block the caller.

102 Synchronization 102 / 102 Solution: Monitors. An example: we have 5 instances of a resource and N processes; only 5 processes can use the resource simultaneously. (Slide shows the process code and the monitor code side by side.)
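A monitor that hands out 5 interchangeable resource instances can be sketched with pthreads as follows. This is an illustrative reconstruction (the slide's actual code is in the lost figure); semantically it is a counting semaphore built as a monitor:

```c
#include <pthread.h>

/* Monitor-style pool of n identical resource instances.
   acquire() blocks while all instances are taken. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  available;   /* signaled when an instance frees up */
    int free_count;              /* instances currently available */
} resource_pool;

void pool_init(resource_pool *p, int n) {
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->available, NULL);
    p->free_count = n;
}

void pool_acquire(resource_pool *p) {
    pthread_mutex_lock(&p->lock);
    while (p->free_count == 0)
        pthread_cond_wait(&p->available, &p->lock);
    p->free_count--;
    pthread_mutex_unlock(&p->lock);
}

void pool_release(resource_pool *p) {
    pthread_mutex_lock(&p->lock);
    p->free_count++;
    pthread_cond_signal(&p->available);   /* wake one waiter, if any */
    pthread_mutex_unlock(&p->lock);
}
```

Each process would call pool_acquire(), use the resource, then pool_release(); the sixth simultaneous acquirer blocks until one of the first five releases.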

103 Synchronization 103 / 102 Solution: Monitors. An example: Dining philosophers.
monitor DP {
  enum { THINKING,  // not holding/wanting resources
         HUNGRY,    // not holding but wanting
         EATING }   // has the resources
       state[5];
  condition cond[5];

  // no need for entry/exit protection code in pickup() because it is inside the monitor
  void pickup (int i) {
    state[i] = HUNGRY;
    test(i);
    if (state[i] != EATING) cond[i].wait();
  }

  void putdown (int i) {
    state[i] = THINKING;
    // test left and right neighbors
    test((i + 4) % 5);
    test((i + 1) % 5);
  }

  void test (int i) {
    if ( (state[(i + 4) % 5] != EATING) &&
         (state[(i + 1) % 5] != EATING) &&
         (state[i] == HUNGRY) ) {
      state[i] = EATING;
      cond[i].signal();
    }
  }

  // initially all thinking
  initialization_code() {
    for (int i = 0; i < 5; i++) state[i] = THINKING;
  }
} /* end of monitor */

104 Synchronization 104 / 102 Solution: Monitors. One philosopher/process (philosopher i) does this in an endless loop:
DP DiningPhilosophers;
while (1) {
  // THINK
  DiningPhilosophers.pickup(i);
  // EAT (use resources)
  DiningPhilosophers.putdown(i);
  // THINK
}

105 Synchronization 105 / 102 Solution: Monitors. First things first: what are the IDs used to access the neighbors of process i? #define LEFT ? #define RIGHT ? And what can state[LEFT], state[RIGHT], and state[i] be: THINKING? HUNGRY? EATING?

106 Synchronization 106 / 102 Solution: Monitors. General idea: #define LEFT (i + 4) % 5 and #define RIGHT (i + 1) % 5. Process i runs test(i); at putdown time it also runs test((i + 4) % 5) and test((i + 1) % 5) on behalf of its neighbors, processes (i + 4) % 5 and (i + 1) % 5. state[LEFT], state[RIGHT], and state[i] each hold THINKING, HUNGRY, or EATING.
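The neighbor arithmetic is worth checking at the wrap-around points of the ring. A tiny helper pair (illustrative names) makes the LEFT/RIGHT macros concrete:

```c
#define N 5

/* Ring neighbors for the dining philosophers table:
   LEFT is (i + 4) % 5, RIGHT is (i + 1) % 5. */
int left_of(int i)  { return (i + 4) % N; }  /* e.g. left_of(0)  == 4 */
int right_of(int i) { return (i + 1) % N; }  /* e.g. right_of(4) == 0 */
```

Adding 4 modulo 5 is the same as subtracting 1 modulo 5, but avoids a negative operand to % for i == 0.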

107 Synchronization 107 / 102 Solution: Monitors. An example: allocate a resource to one of several processes. Priority-based: among the processes that want the resource, the one that will use it for the shortest (known) amount of time gets it first. (Figure: several processes/threads that want to use one resource.)

108 Synchronization 108 / 102 Solution: Monitors. An example: allocate a resource to one of several processes. Assume we have a condition variable implementation that can enqueue sleeping/waiting processes by a priority specified as a parameter to the wait() call: condition x; x.wait(priority); (Figure: queue of sleeping processes waiting on condition x, with priorities 10, 20, 45, 70.) The priority could be the time-duration for which the process will use the resource.

109 Synchronization 109 / 102 Solution: Monitors. An example: allocate a resource to one of several processes.
monitor ResourceAllocator {
  boolean busy;  // true if the resource is currently in use/allocated
  condition x;   // sleep the process that cannot acquire the resource

  void acquire(int time) {
    if (busy)
      x.wait(time);
    busy = TRUE;
  }

  void release() {
    busy = FALSE;
    x.signal();  // wake up the process at the head of the waiting queue
  }

  initialization_code() {
    busy = FALSE;
  }
}
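A pthreads rendering of this allocator, as an illustrative sketch, minus the priority wait queue: POSIX condition variables take no priority argument, so waiters wake in an unspecified order, and a full version would keep an explicit waiter list sorted by requested time. Note also that the slide's `if (busy)` becomes `while (busy)` here, because pthreads use signal-and-continue (Mesa) semantics, so a woken waiter must re-check the condition:

```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  freed;   /* signaled when the resource is released */
    int busy;                /* nonzero if the resource is allocated */
} resource_allocator;

void ra_init(resource_allocator *a) {
    pthread_mutex_init(&a->lock, NULL);
    pthread_cond_init(&a->freed, NULL);
    a->busy = 0;
}

void ra_acquire(resource_allocator *a) {
    pthread_mutex_lock(&a->lock);
    while (a->busy)                       /* re-check after every wakeup */
        pthread_cond_wait(&a->freed, &a->lock);
    a->busy = 1;
    pthread_mutex_unlock(&a->lock);
}

void ra_release(resource_allocator *a) {
    pthread_mutex_lock(&a->lock);
    a->busy = 0;
    pthread_cond_signal(&a->freed);
    pthread_mutex_unlock(&a->lock);
}
```

The slide's `if` is correct only under signal-and-wait (Hoare) semantics, where the signaled waiter runs immediately; under Mesa semantics another thread can sneak in between the signal and the wakeup.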

110 Synchronization 110 / 102 Solution: Monitors. An example: allocate a resource to one of several processes. Each process should use the resource between acquire() and release() calls.
ResourceAllocator RA;
Process/Thread 1: RA.acquire(10); ..use resource.. RA.release();
Process/Thread 2: RA.acquire(30); ..use resource.. RA.release();
..
Process/Thread N: RA.acquire(25); ..use resource.. RA.release();

