
1 CENG334 Introduction to Operating Systems
Erol Sahin, Dept. of Computer Eng., Middle East Technical University, Ankara, TURKEY
URL: http://kovan.ceng.metu.edu.tr/~erol/Courses/CENG334

Threads
Topics: Concurrent programming; Threads
Some of the following slides are adapted from Matt Welsh, Harvard Univ.

2 Concurrent Programming
Many programs want to do many things "at once":
Web browser: Download web pages, read cache files, accept user input, ...
Web server: Handle incoming connections from multiple clients at once
Scientific programs: Process different parts of a data set on different CPUs
In each case, we would like to share memory across these activities:
Web browser: Share the buffer for the HTML page and inlined images
Web server: Share a memory cache of recently-accessed pages
Scientific programs: Share the memory of the data set being processed
Can't we simply do this with multiple processes?

3 Why processes are not always ideal...
Processes are not very efficient
Each process has its own PCB and OS resources
Typically high overhead for each process: e.g., 1.7 KB per task_struct on Linux!
Creating a new process is often very expensive
Processes don't (directly) share memory
Each process has its own address space
Parallel and concurrent programs often want to directly manipulate the same memory
e.g., when processing elements of a large array in parallel
Note: Many OSs provide some form of inter-process shared memory
cf. the UNIX shmget() and shmat() system calls
Still, this requires more programmer work and does not address the efficiency issues.
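To make the shmget()/shmat() point concrete, here is a minimal sketch of System V shared memory between a parent and a child process (error handling abbreviated); notice how much setup is needed just to share one buffer:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

int main(void) {
    /* create a 4 KB shared memory segment */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); exit(1); }

    char *buf = shmat(shmid, NULL, 0);      /* map it into our address space */

    if (fork() == 0) {                      /* child writes... */
        strcpy(buf, "hello from child");
        _exit(0);
    }
    wait(NULL);
    printf("parent sees: %s\n", buf);       /* ...parent reads it back */

    shmdt(buf);                             /* unmap and delete the segment */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}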

4 Can we do better?
What can we share across all of these tasks? What is private to each task?

5 Can we do better?
What can we share across all of these tasks?
Same code – generally running the same or similar programs
Same data
Same privileges
Same OS resources (files, sockets, etc.)
What is private to each task?
Execution state: CPU registers, stack, and program counter
Key idea of this lecture: Separate the concept of a process from a thread of control
The process is the address space and OS resources
Each thread has its own CPU execution state

6 Processes and Threads
Each process has one or more threads "within" it
Each thread has its own stack, CPU registers, etc.
All threads within a process share the same address space and OS resources
Threads share memory, so they can communicate directly!
The thread is now the unit of CPU scheduling
A process is just a "container" for its threads
Each thread is bound to its containing process
(Diagram: processes as containers, each holding one or more threads in a shared address space)

7 (Old) Process Address Space
(Diagram: a single address space from 0x00000000 to 0xFFFFFFFF containing the code (text segment), initialized vars (data segment), uninitialized vars (BSS segment), heap, and stack, with a region reserved for the OS; the process has one stack pointer and one program counter.)

8 (New) Address Space with Threads
(Diagram: the same address space layout, but now with a separate stack — and a separate stack pointer and PC — for each of threads 0, 1, and 2.)
All threads in a single process share the same address space!

9 Implementing Threads
Given what we know about processes, implementing threads is "easy"
Idea: Break the PCB into two pieces:
Thread-specific stuff: processor state
Process-specific stuff: address space and OS resources (open files, etc.)
(Diagram: two TCBs — each with PC, registers, thread ID, and state — pointing to a single PCB holding PID 27682, the address space, open files, network sockets, and user/group IDs)

10 Thread Control Block (TCB)
The TCB contains info on a single thread
Just processor state and a pointer to the corresponding PCB
The PCB contains information on the containing process
Address space and OS resources... but NO processor state!
(Diagram: same TCB/PCB picture as the previous slide)

11 Thread Control Block (TCB)
TCBs are smaller and cheaper than processes
The Linux TCB (thread_struct) has 24 fields
The Linux PCB (task_struct) has 106 fields
(Diagram: same TCB/PCB picture as the previous slides)

12 Context Switching
The TCB is now the unit of a context switch
Ready queue, wait queues, etc. now contain pointers to TCBs
A context switch causes CPU state to be copied to/from the TCB
Context switch between two threads in the same process: no need to change the address space
Context switch between two threads in different processes: must change the address space, sometimes invalidating caches
This will become relevant when we talk about virtual memory.
(Diagram: a ready queue holding TCBs from different processes, e.g., PID 4277/T0 and PID 4391/T2)

13 User-Level Threads
Early UNIX designs did not support threads at the kernel level
The OS only knew about processes with separate address spaces
However, we can still implement threads as a user-level library
The OS does not need to know anything about multiple threads in a process!
How is this possible?
Recall: All threads in a process share the same address space.
So, managing multiple threads only requires switching the CPU state (PC, registers, etc.)
And this can be done directly by a user program without OS help!

14 Implementing User-Level Threads
Alternative to kernel-level threads: Implement all thread functions as a user-level library
e.g., libpthread.a
The OS thinks the process has a single thread, and uses the same PCB structure as in the last lecture
The OS need not know anything about multiple threads in a process!
How to create a user-level thread?
The thread library maintains a TCB for each thread in the application
Just a linked list or some other data structure
Allocate a separate stack for each thread (usually with malloc)
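As an illustration, the library's bookkeeping might look like the sketch below. The names (struct tcb, thread_list, STACK_SIZE, tcb_alloc) are hypothetical, not from any particular library; the point is that a user-level TCB is just saved CPU state plus a malloc'd stack:

#include <setjmp.h>
#include <stdlib.h>

#define STACK_SIZE (64 * 1024)          /* hypothetical per-thread stack size */

enum thread_state { READY, RUNNING, FINISHED };

/* A minimal user-level TCB: saved CPU state plus a private stack. */
struct tcb {
    jmp_buf            context;         /* saved registers/PC/SP (via setjmp) */
    char              *stack;           /* malloc'd stack for this thread */
    enum thread_state  state;
    struct tcb        *next;            /* linked list of all threads */
};

struct tcb *thread_list = NULL;         /* the library's global thread list */

/* Create a TCB and its stack; pointing the new context at the stack
   is machine-dependent and omitted here. */
struct tcb *tcb_alloc(void) {
    struct tcb *t = malloc(sizeof *t);
    t->stack = malloc(STACK_SIZE);
    t->state = READY;
    t->next = thread_list;
    thread_list = t;
    return t;
}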

15 User-level thread address space
(Diagram: the original stack provided by the OS sits near the top of the address space; additional thread stacks, each with its own stack pointer and PC, are allocated by the process between it and the heap. Below them sit the heap, uninitialized vars (BSS segment), initialized vars (data segment), and code (text segment), plus a region reserved for the OS.)
Stacks must be allocated carefully and managed by the thread library.

16 User-level Context Switching
How to switch between user-level threads?
We need some way to swap CPU state.
Fortunately, this does not require any privileged instructions!
So, the threads library can use the same instructions as the OS to save or load the CPU state into the TCB.
Why is it safe to let the user switch the CPU state?

17 setjmp() and longjmp()
C standard library routines for saving and restoring processor state.

int setjmp(jmp_buf env);
Saves the current CPU state in the jmp_buf structure
Returns 0 when called directly; returns a nonzero value when returning via longjmp

void longjmp(jmp_buf env, int returnval);
Restores the CPU state from the jmp_buf structure, causing the corresponding setjmp() call to return with return value returnval
After longjmp completes, execution continues as if the corresponding setjmp had just returned
If returnval is 0, setjmp behaves as if it had returned 1; otherwise it returns returnval

jmp_buf is an implementation-defined type containing CPU-specific fields for saving registers, the program counter, etc.

18 setjmp/longjmp example

#include <stdio.h>
#include <setjmp.h>

int main(int argc, char *argv[]) {
    int i, restored = 0;
    jmp_buf saved;

    for (i = 0; i < 10; i++) {
        printf("Value of i is now %d\n", i);
        if (i == 5) {
            printf("OK, saving state...\n");
            if (setjmp(saved) == 0) {
                printf("Saved CPU state and breaking from loop.\n");
                break;
            } else {
                printf("Restored CPU state, continuing where we saved\n");
                restored = 1;
            }
        }
    }
    if (!restored)
        longjmp(saved, 1);   /* jump back into the loop at the setjmp point */
    return 0;
}

19 setjmp/longjmp example (output)
Value of i is now 0
Value of i is now 1
Value of i is now 2
Value of i is now 3
Value of i is now 4
Value of i is now 5
OK, saving state...
Saved CPU state and breaking from loop.
Restored CPU state, continuing where we saved
Value of i is now 6
Value of i is now 7
Value of i is now 8
Value of i is now 9

20 Preemptive vs. non-preemptive threads
How to prevent a single user-level thread from hogging the CPU?
Strategy 1: Require threads to cooperate
Called non-preemptive threads
Each thread must call back into the thread library periodically
This gives the thread library control over the thread's execution
yield() operation: the thread voluntarily "gives up" the CPU
Pop quiz: What happens when a thread calls yield()?

21 Preemptive vs. non-preemptive threads
Strategy 2: Use preemption
The thread library tells the OS to send it a signal periodically
A signal is like a hardware interrupt: it causes the process to jump into a signal handler
The signal handler gives control back to the thread library
The thread library then context switches to a new thread
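To sketch Strategy 2 on UNIX: the library can ask the kernel for a periodic signal with setitimer() and context switch from the handler. A minimal sketch, where schedule() is a hypothetical stand-in for the library's "save the current thread, run the next one" routine (real code must also worry about async-signal-safety):

#include <signal.h>
#include <string.h>
#include <sys/time.h>

static void schedule(void) {
    /* hypothetical: save the current TCB, pick a READY thread, switch to it */
}

static void preempt_handler(int sig) {
    schedule();                               /* the current thread is preempted here */
}

void start_preemption(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = preempt_handler;
    sigaction(SIGVTALRM, &sa, NULL);          /* SIGVTALRM fires on virtual (CPU) time */

    struct itimerval tv;
    tv.it_interval.tv_sec = 0;
    tv.it_interval.tv_usec = 10000;           /* every 10 ms of CPU time */
    tv.it_value = tv.it_interval;
    setitimer(ITIMER_VIRTUAL, &tv, NULL);     /* kernel now interrupts us periodically */
}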

22 Which approach is better?
Kernel-level threads: Pros? Cons?
User-level threads: Pros? Cons?
(Answers on the next two slides.)

23 Kernel-level threads
Pro: The OS knows about all the threads in a process
Can assign different scheduling priorities to each one
The kernel can context switch between multiple threads in one process
Con: Thread operations require calling the kernel
Creating, destroying, or context switching a thread requires a system call

24 User-level threads
Pro: Thread operations are very fast
Typically 10-100x faster than going through the kernel
Pro: Thread state is very small
Just CPU state and stack, no additional overhead
Con: If one thread blocks, it stalls the entire process
e.g., if one thread waits for file I/O, all threads in the process have to wait
Con: Can't use multiple CPUs!
The kernel only knows about one CPU context
Con: The OS may not make good decisions
It could schedule a process with only idle threads
It could deschedule a process with a thread holding a lock

25 Threads programming interface
Standard API called POSIX threads (Pthreads)

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);

thread: the ID of the new thread is returned here
attr: set of attributes for the new thread (scheduling policy, etc.; NULL for defaults)
start_routine: function pointer to the "main function" for the new thread
arg: argument passed to start_routine()

void pthread_exit(void *retval);
Exit with the given return value

int pthread_join(pthread_t thread, void **thread_return);
Waits for "thread" to exit, and returns the thread's return value

26 Using Pthreads

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                        /* this data is shared by the thread(s) */
void *runner(void *param);      /* the thread */

int main(int argc, char *argv[]) {
    pthread_t tid;              /* the thread identifier */
    pthread_attr_t attr;        /* set of attributes for the thread */

    pthread_attr_init(&attr);   /* get the default attributes */
    pthread_create(&tid, &attr, runner, argv[1]);  /* create the thread */
    pthread_join(tid, NULL);    /* now wait for the thread to exit */
    printf("sum = %d\n", sum);
    return 0;
}

void *runner(void *param) {     /* the thread begins control in this function */
    int i, upper = atoi(param);
    sum = 0;
    if (upper > 0) {
        for (i = 1; i <= upper; i++)
            sum += i;
    }
    pthread_exit(0);
}
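To try this out (assuming the file is saved as sum.c — the name is arbitrary), compile with pthreads support and pass the upper bound on the command line:

$ gcc -pthread -o sum sum.c
$ ./sum 100
sum = 5050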

27 Thread Issues
All threads in a process share memory:
What happens when two threads access the same variable?
Which value does Thread 2 see when it reads "foo"? What does it depend on?
(Diagram: two threads in one address space; one writes variable foo while the other reads it)

28 CENG334 Introduction to Operating Systems
Erol Sahin, Dept. of Computer Eng., Middle East Technical University, Ankara, TURKEY

Threads and Synchronization
Topics: Using threads; Implementation of threads; The synchronization problem; Race conditions and critical sections; Mutual exclusion; Locks; Spinlocks; Mutexes

29 Single and Multithreaded Processes
(Diagram: a single-threaded process vs. a multithreaded process; the threads share the process's code, data, and files, but each has its own registers and stack.)

30 Synchronization
Threads cooperate in multithreaded programs in several ways:
Access to shared state
e.g., multiple threads accessing a memory cache in a Web server
Coordinating their execution
e.g., pressing the stop button on a browser cancels the download of the current page: the "stop button thread" has to signal the "download thread"
For correctness, we have to control this cooperation
We must assume threads interleave their executions arbitrarily and at different rates; scheduling is not under the application's control
We control cooperation using synchronization, which enables us to restrict the interleaving of executions

31 Shared Resources
We'll focus on coordinating access to shared resources
Basic problem: two concurrent threads are accessing a shared variable
If the variable is read/modified/written by both threads, then access to the variable must be controlled
Otherwise, unexpected results may occur
We'll look at:
Mechanisms to control access to shared resources
Low-level mechanisms: locks
Higher-level mechanisms: mutexes, semaphores, monitors, and condition variables
Patterns for coordinating access to shared resources
bounded buffer, producer-consumer, ...
This stuff is complicated and rife with pitfalls
Details are important for completing assignments
Expect questions on the midterm/final!

32 Shared Variable Example
Suppose we implement a function to withdraw money from a bank account:

int withdraw(account, amount) {
    balance = get_balance(account);
    balance = balance - amount;
    put_balance(account, balance);
    return balance;
}

Now suppose that you and your friend share a bank account with a balance of $1500.00.
What happens if you both go to separate ATM machines and simultaneously withdraw $100.00 from the account?

33 Example continued
We represent the situation by creating a separate thread for each ATM user doing a withdrawal
Both threads run on the same bank server system, each executing the same code:

int withdraw(account, amount) {
    balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    return balance;
}

What's the problem with this? What are the possible balance values after each thread runs?

34 Interleaved Execution
The execution of the two threads can be interleaved
Assume preemptive scheduling: each thread can be context switched after each instruction
We need to worry about the worst-case scenario!

Execution sequence as seen by the CPU:
Thread 1: balance = get_balance(account);
Thread 1: balance -= amount;
(context switch)
Thread 2: balance = get_balance(account);
Thread 2: balance -= amount;
Thread 2: put_balance(account, balance);
(context switch)
Thread 1: put_balance(account, balance);

What's the account balance after this sequence? And who's happier, the bank or you???

35 Interleaved Execution
The same sequence, annotated with the account balance after each step:

Thread 1: balance = get_balance(account);     Balance = $1500
Thread 1: balance -= amount;                  Local = $1400
Thread 2: balance = get_balance(account);     Balance = $1500
Thread 2: balance -= amount;                  Local = $1400
Thread 2: put_balance(account, balance);      Balance = $1400
Thread 1: put_balance(account, balance);      Balance = $1400!

Two $100 withdrawals, but the balance dropped by only $100 — you got $200 out, so you're happier than the bank.

36 Race Conditions
The problem: two concurrent threads accessed a shared resource without any synchronization
This is called a race condition
The result of the concurrent access is non-deterministic
The result depends on timing:
When context switches occurred
Which thread ran at each context switch
What the threads were doing
We need mechanisms for controlling access to shared resources in the face of concurrency
This allows us to reason about the operation of programs
Essentially, we want to re-introduce determinism into the threads' execution
Synchronization is necessary for any shared data structure
buffers, queues, lists, hash tables, ...
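A race like this is easy to reproduce. In the sketch below, two threads each increment a shared counter 1,000,000 times without synchronization; on most machines the final count comes out well below 2,000,000, and different on each run, because the threads' load/add/store sequences interleave:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                     /* shared, unsynchronized */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* read-modify-write: not atomic! */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}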

37 Which resources are shared?
Local variables in a function are not shared
They exist on the stack, and each thread has its own stack
You can't safely pass a pointer to a local variable to another thread. Why?
Global variables are shared
Stored in the static data portion of the address space, accessible by any thread
Dynamically-allocated data is shared
Stored in the heap, accessible by any thread
(Diagram: the address space with the per-thread stacks marked unshared, and the heap, data, BSS, and text segments marked shared)
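To see why passing a pointer to a local variable to another thread is unsafe, consider this sketch: the local n lives in spawn()'s stack frame, which is popped (and may be reused) long before the child thread dereferences the pointer:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *reader(void *arg) {
    sleep(1);                          /* spawn()'s frame is likely gone by now */
    printf("%d\n", *(int *)arg);       /* undefined behavior: dangling pointer */
    return NULL;
}

void spawn(void) {
    int n = 42;                        /* lives on spawn()'s stack */
    pthread_t t;
    pthread_create(&t, NULL, reader, &n);
    pthread_detach(t);
}                                      /* frame popped here; &n now dangles */

int main(void) {
    spawn();
    sleep(2);                          /* give the reader time to run */
    return 0;
}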

38 Mutual Exclusion
We want to use mutual exclusion to synchronize access to shared resources
Meaning: only one thread can access a shared resource at a time
Code that uses mutual exclusion to synchronize its execution is called a critical section
Only one thread at a time can execute code in the critical section
All other threads are forced to wait on entry
When one thread leaves the critical section, another can enter
(Diagram: Thread 1 inside the critical section, modifying the account balance)
Adapted from Matt Welsh's (Harvard University) slides.

39 Mutual Exclusion (cont.)
(Diagram: Thread 2 arrives while Thread 1 is in the critical section; the 2nd thread must wait for the critical section to clear)
Adapted from Matt Welsh's (Harvard University) slides.

40 Mutual Exclusion (cont.)
(Diagram: the 1st thread leaves the critical section; the 2nd thread is now free to enter)
Adapted from Matt Welsh's (Harvard University) slides.

41 Critical Section Requirements
Mutual exclusion: at most one thread is currently executing in the critical section
Progress: if thread T1 is outside the critical section, then T1 cannot prevent T2 from entering the critical section
Bounded waiting (no starvation): if thread T1 is waiting on the critical section, then T1 will eventually enter the critical section
Assumes threads eventually leave critical sections
Performance: the overhead of entering and exiting the critical section is small with respect to the work being done within it
Adapted from Matt Welsh's (Harvard University) slides.

42 Locks
A lock is an object (in memory) that provides the following two operations:
acquire(): a thread calls this before entering a critical section
May require waiting to enter the critical section
release(): a thread calls this after leaving a critical section
Allows another thread to enter the critical section
A call to acquire() must have a corresponding call to release()
Between acquire() and release(), the thread holds the lock
acquire() does not return until the caller holds the lock
At most one thread can hold a lock at a time (usually!)
We'll talk about the exceptions later...
What can happen if acquire() and release() calls are not paired?
Adapted from Matt Welsh's (Harvard University) slides.

43 Using Locks

int withdraw(account, amount) {
    acquire(lock);
    /* --- critical section --- */
    balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    /* --- end critical section --- */
    release(lock);
    return balance;
}

Adapted from Matt Welsh's (Harvard University) slides.
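With POSIX threads the same pattern is written with pthread_mutex_t. A sketch of the withdraw example (get_balance and put_balance are assumed from the slides, not a real API):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

extern int  get_balance(int account);             /* assumed helpers */
extern void put_balance(int account, int balance);

int withdraw(int account, int amount) {
    pthread_mutex_lock(&lock);        /* acquire: may block */
    int balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    pthread_mutex_unlock(&lock);      /* release: lets another thread enter */
    return balance;
}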

44 Execution with Locks
What happens when the blue thread tries to acquire the lock?

Thread 1: acquire(lock);                          (Thread 1 runs)
Thread 1: balance = get_balance(account);
Thread 1: balance -= amount;
Thread 2: acquire(lock);                          (Thread 2 waits on the lock)
Thread 1: put_balance(account, balance);
Thread 1: release(lock);                          (Thread 1 completes)
Thread 2: balance = get_balance(account);         (Thread 2 resumes)
Thread 2: balance -= amount;
Thread 2: put_balance(account, balance);
Thread 2: release(lock);

Adapted from Matt Welsh's (Harvard University) slides.

45 Spinlocks
Very simple way to implement a lock:

struct lock {
    int held = 0;
}

void acquire(lock) {
    while (lock->held)
        ;                  /* the caller busy waits for the lock to be released */
    lock->held = 1;
}

void release(lock) {
    lock->held = 0;
}

Why doesn't this work? Where is the race condition?
Adapted from Matt Welsh's (Harvard University) slides.

46 Implementing Spinlocks
The problem is that the internals of the lock acquire/release have critical sections too!
The acquire() and release() actions must be atomic
Atomic means that the code cannot be interrupted during execution: "all or nothing" execution

void acquire(lock) {
    while (lock->held)
        ;
    /* What can happen if there is a context switch here? */
    lock->held = 1;
}

Adapted from Matt Welsh's (Harvard University) slides.

47 Implementing Spinlocks
The test of lock->held and the assignment lock->held = 1 together form the sequence that needs to be atomic.
Adapted from Matt Welsh's (Harvard University) slides.

48 Implementing Spinlocks
Making acquire() atomic requires help from hardware!
Disabling interrupts
Why does this prevent a context switch from occurring?
Atomic instructions – the CPU guarantees the entire action will execute atomically
Test-and-set
Compare-and-swap
Adapted from Matt Welsh's (Harvard University) slides.

49 Spinlocks using test-and-set
The CPU provides the following as one atomic instruction:

bool test_and_set(bool *flag) {
    ...   // hardware-dependent implementation: sets *flag to true
          // and returns its previous value, all in one atomic step
}

So to fix our broken spinlocks, we do this:

void acquire(lock) {
    while (test_and_set(&lock->held))
        ;
}

void release(lock) {
    lock->held = 0;
}

Adapted from Matt Welsh's (Harvard University) slides.
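On real hardware this maps onto an atomic instruction that the compiler exposes. A working sketch using C11's atomic_flag, whose test-and-set operation is guaranteed to be atomic:

#include <stdatomic.h>

struct spinlock {
    atomic_flag held;                 /* initialize with ATOMIC_FLAG_INIT */
};

void spin_acquire(struct spinlock *l) {
    /* atomically sets the flag and returns its previous value */
    while (atomic_flag_test_and_set(&l->held))
        ;                             /* spin until it was previously clear */
}

void spin_release(struct spinlock *l) {
    atomic_flag_clear(&l->held);
}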

50 What's wrong with spinlocks?
OK, so spinlocks work (if you implement them correctly), and they are simple. So what's the catch?
Adapted from Matt Welsh's (Harvard University) slides.

51 Problems with spinlocks
Horribly wasteful!
Threads waiting to acquire locks spin on the CPU
This eats up lots of cycles and slows down the progress of other threads
Note that other threads can still run... how?
What happens if you have a lot of threads trying to acquire the lock?
We only want spinlocks as primitives to build higher-level synchronization constructs
Adapted from Matt Welsh's (Harvard University) slides.

52 Disabling Interrupts
An alternative to spinlocks:

struct lock {
    // Note – no state!
}

void acquire(lock) {
    cli();    // disable interrupts
}

void release(lock) {
    sti();    // reenable interrupts
}

Can two threads disable/reenable interrupts at the same time?
What's wrong with this approach?
Adapted from Matt Welsh's (Harvard University) slides.

53 Disabling Interrupts
What's wrong with this approach?
It can only be implemented at kernel level (why?)
It is inefficient on a multiprocessor system (why?)
All locks in the system are mutually exclusive: there is no separation between different locks for different bank accounts
Adapted from Matt Welsh's (Harvard University) slides.

54 Peterson's Algorithm

int flag[2] = { 0, 0 };
int turn;

P0:
flag[0] = 1;
turn = 1;
while (flag[1] == 1 && turn == 1)
    ;   // busy wait
// critical section
...
// end of critical section
flag[0] = 0;

P1:
flag[1] = 1;
turn = 0;
while (flag[0] == 1 && turn == 0)
    ;   // busy wait
// critical section
...
// end of critical section
flag[1] = 0;

The algorithm uses two variables, flag and turn. A flag value of 1 indicates that the process wants to enter the critical section. The variable turn holds the ID of the process whose turn it is. Process P0 may enter the critical section if P1 does not want to enter its own critical section, or if P1 has given priority to P0 by setting turn to 0.
Adapted from Matt Welsh's (Harvard University) slides.
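One caveat worth noting: Peterson's algorithm assumes memory accesses become visible in program order, which modern compilers and CPUs do not guarantee for plain variables. A sketch of P0's entry/exit protocol with C11 sequentially-consistent atomics, which restores that guarantee:

#include <stdatomic.h>

atomic_int flag[2];                   /* zero-initialized: nobody wants in */
atomic_int turn;

void p0_enter(void) {
    atomic_store(&flag[0], 1);        /* I want to enter */
    atomic_store(&turn, 1);           /* ...but you go first */
    while (atomic_load(&flag[1]) == 1 && atomic_load(&turn) == 1)
        ;                             /* busy wait */
    /* critical section */
}

void p0_exit(void) {
    atomic_store(&flag[0], 0);
}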

55 Mutexes – Blocking Locks
Really want a thread waiting to enter a critical section to block
Put the thread to sleep until it can enter the critical section
Frees up the CPU for other threads to run
Straightforward to implement using our TCB queues!
(Diagram: a lock with state "unlocked" and an empty wait queue; Thread 1 arrives and checks the lock state)
Adapted from Matt Welsh's (Harvard University) slides.

56 Mutexes – Blocking Locks (cont.)
(Diagram: Thread 1 checks the lock state, sets it to locked, and enters the critical section)
Adapted from Matt Welsh's (Harvard University) slides.

57 Mutexes – Blocking Locks (cont.)
(Diagram: Thread 2 arrives and checks the lock state, which is locked)
Adapted from Matt Welsh's (Harvard University) slides.

58 Mutexes – Blocking Locks (cont.)
(Diagram: Thread 2 adds itself to the wait queue and goes to sleep)
Adapted from Matt Welsh's (Harvard University) slides.

59 Mutexes – Blocking Locks (cont.)
(Diagram: Thread 3 arrives, checks the lock state, and also adds itself to the wait queue behind Thread 2)
Adapted from Matt Welsh's (Harvard University) slides.

60 Mutexes – Blocking Locks (cont.)
(Diagram: Thread 1 finishes the critical section; Threads 2 and 3 are still on the wait queue)
Adapted from Matt Welsh's (Harvard University) slides.

61 Mutexes – Blocking Locks (cont.)
(Diagram: Thread 1 resets the lock state to unlocked and wakes one thread — Thread 3 — from the wait queue)
Adapted from Matt Welsh's (Harvard University) slides.

62 Mutexes – Blocking Locks (cont.)
(Diagram: Thread 3 can now grab the lock and enter the critical section; the lock state is locked again and Thread 2 still waits)
Adapted from Matt Welsh's (Harvard University) slides.
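Putting the whole sequence into code: a sketch of a blocking mutex built from a spinlock guarding a state flag and a TCB wait queue. The helpers (thread_sleep, thread_wakeup, current_tcb, and the queue operations) are hypothetical stand-ins for the thread library's internals:

struct mutex {
    struct spinlock guard;            /* protects the fields below */
    int locked;                       /* 0 = unlocked, 1 = locked */
    struct queue wait_queue;          /* TCBs of sleeping threads */
};

void mutex_lock(struct mutex *m) {
    spin_acquire(&m->guard);
    while (m->locked) {                          /* 1) check lock state */
        enqueue(&m->wait_queue, current_tcb());  /* 2) add self to wait queue */
        spin_release(&m->guard);
        thread_sleep();                          /* block until woken */
        spin_acquire(&m->guard);
    }
    m->locked = 1;                               /* 3) set state to locked */
    spin_release(&m->guard);
}

void mutex_unlock(struct mutex *m) {
    spin_acquire(&m->guard);
    m->locked = 0;                               /* reset state to unlocked */
    if (!queue_empty(&m->wait_queue))
        thread_wakeup(dequeue(&m->wait_queue));  /* wake one waiter */
    spin_release(&m->guard);
}

One subtlety: releasing the guard and going to sleep must effectively happen atomically, or a wakeup delivered in between can be lost; real implementations hand the guard to the sleep primitive for exactly this reason.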

63 Limitations of locks
Locks are great, and simple. What can they not easily accomplish?
What if you have a data structure where it's OK for many threads to read the data, but only one thread to write it? (Bank account example.)
Locks only let one thread access the data structure at a time.
Adapted from Matt Welsh's (Harvard University) slides.

64 Limitations of locks (cont.)
What if you want to protect access to two (or more) data structures at a time?
e.g., transferring money from one bank account to another
Simple approach: use a separate lock for each.
What happens if you have a transfer from account A -> account B at the same time as a transfer from account B -> account A?
Hmmmmm... tricky. We will get into this next time.
Adapted from Matt Welsh's (Harvard University) slides.

65 Next Lecture
Higher-level synchronization primitives: how to do fancier stuff than just locks
Semaphores, monitors, and condition variables
Implemented using basic locks as a primitive
Allow applications to perform more complicated coordination schemes
Adapted from Matt Welsh's (Harvard University) slides.

66 Next Week...
Next Lecture: Synchronization
How do we prevent multiple threads from stomping on each other's memory?
How do we get threads to coordinate their activity?
This will be one of the most important lectures in the course...

